Tag Archives: analytics

Key trends for efficient analytics (1): a web taxonomy

In the wake of the most recent MeasureCamp, held in London on March 14th, I have decided to start a series about key trends in analytics.

During MeasureCamp, beyond the usual lot of (highly interesting) technical workshops, I identified three trends that are increasingly important for digital practitioners who want to improve the efficiency of their analytics, be it for a better understanding of their clients or for a smoother experience of analytics within the organization.

These three trends are:

  1. using a taxonomy to ensure your website performs correctly
  2. striking a balance between ethics and privacy when coping with regulations
  3. drawing a path for your analysts to improve their engagement and their relevance

I shall tackle the first topic today, i.e. taxonomy.

[Image: Linné's taxonomy]

This topic has been a constant interest throughout my career, especially once I came to work on digital data. I even participated in the development of a patent on this subject (System and method for automated classification of web pages and domains). And I believe it is of the highest importance for improving the way your data are collected, and hence the efficiency of your analytics.

To get a good analytics output, it is not enough to have good tag management, an efficient SEO strategy or a good AdWords budget (even though all three are obviously necessary). The optimization of analytics starts with a sound website, aligned with your strategy, properly coded and suitably organized to answer key questions about your users and your customers.

There are two key success factors: organizing the whole site in full accordance with your original objectives, and aligning that organization with the experience of the site's users.

Aligning the site with the strategy is an obvious point. But not an easy one! The strategy may change often: when new products are launched, when fashion trends evolve, when clients upgrade (or downgrade) their expectations… But one can seldom afford to change the structure of the site that often. And over time, your site may no longer be aligned with the company's goals, at least not in all its parts.

The first reaction will be to run batches of A/B tests to find local improvements, new layouts, better priorities; but these are short-term fixes, while the tide slowly moves away…

Thinking ahead is definitely better in the long term, and a proper taxonomy gives you the flexibility to align your website's key measurement points with a volatile strategy. Why? Because a taxonomy is a frame, a container organizer, whereas working solely on the keywords, on the "thesaurus" as Heather Hedden says, is a one-shot effort.

And it is never too late. Your website does not need to be fully revamped to comply with a taxonomy. You just have to rethink the organization of your internal data and align key concepts with your SEO strategy, as is very well described in this blog post – "Developing a Website Taxonomy to Benefit SEO" – by Jonathan Ellins from Hallam Internet, a UK-based consultancy. I have inserted one of the graphs used within their post below, showing the kind of decision tree a taxonomy may generate:

[Image: decision tree for "hedges" topics, from the Hallam Internet post]

The interesting thing in this post is that it does not focus only on content, but also on what they call "intent" (I personally call it "context"), which opens the door to alternative ways of organizing one's website data, improving analytics in the end.
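To make the idea concrete, here is a minimal sketch of a taxonomy as a tree, from which breadcrumb paths (usable as URL structures or analytics dimensions) can be derived. The category names are illustrative, not taken from the Hallam Internet post.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One category in the website taxonomy."""
    name: str
    children: list["Node"] = field(default_factory=list)

def paths(node: Node, prefix: str = ""):
    """Yield every breadcrumb path in the taxonomy, depth first."""
    current = f"{prefix}/{node.name}"
    yield current
    for child in node.children:
        yield from paths(child, current)

# Illustrative fragment of a gardening site's taxonomy.
root = Node("garden", [
    Node("hedges", [Node("evergreen"), Node("flowering")]),
    Node("lawn"),
])

for p in paths(root):
    print(p)
# /garden
# /garden/hedges
# /garden/hedges/evergreen
# /garden/hedges/flowering
# /garden/lawn
```

Because the frame (the tree) is separate from the keywords (the node labels), a change of strategy becomes a relabeling or a re-hang of a branch, not a site rebuild.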

This brings me to the second success factor: considering the user experience.

Beyond intent, there is a much broader scope, i.e. how users experience the website, and the way they browse all the way to the eagerly awaited conversion. The usual way to handle such users properly is to define typical personae.

[Image: personae]

Personae are very useful here, as they not only show the various ways of navigating the website, but also allow the identification of key crossings, loops and dead ends, which are clear signs of a website that is not aligned with users' expectations. And a flexible concept like a taxonomy offers the opportunity to alter the logical links between two pages, modifying the browsing so that users find their way more easily.
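Loops and dead ends can be spotted directly in session data. Below is a minimal sketch, assuming session paths have already been exported from your analytics tool as ordered lists of page names; the pages and sessions are hypothetical.

```python
from collections import Counter

# Hypothetical exported sessions: ordered page views per visit.
sessions = [
    ["home", "shoes", "cart", "checkout"],       # converts
    ["home", "shoes", "home", "shoes", "home"],  # goes in circles
    ["home", "blog", "careers"],                 # stalls on a leaf page
]

CONVERTING_PAGES = {"checkout"}
loops, dead_ends = Counter(), Counter()

for path in sessions:
    # A revisited page within one session hints at a navigation loop.
    seen = set()
    for page in path:
        if page in seen:
            loops[page] += 1
            break
        seen.add(page)
    # A session ending anywhere but a converting page is a dead-end candidate.
    if path[-1] not in CONVERTING_PAGES:
        dead_ends[path[-1]] += 1

print("loop candidates:", loops.most_common(3))
print("dead-end candidates:", dead_ends.most_common(3))
```

Pages that keep surfacing in both counters are the first candidates for new logical links in the taxonomy.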

In conclusion, it certainly is not easy to revamp one's site on a regular basis, and no easier to change one's data management system all too often. In this respect, a taxonomy applied to your website may offer enough flexibility to cope with this ever-changing world, so that you may keep providing sensible analytics to your stakeholders, even when they are all too often changing their moods…

Should you be interested in developing such a taxonomy, or at least in discussing the relevance of such an enhancement, I would gladly be your man. I may be contacted here.

Next week, I shall discuss the impacts of ethics and privacy rules on analytics. Stay tuned!

PS: For those interested in a much deeper approach to taxonomy concepts, I recommend Heather Hedden's blog, "The Accidental Taxonomist". Beware, not for rookies!

Data Elicitation in three steps (3/3): Data Analytics

Today is wrap-up time. Before dealing with practical use cases and commenting on data-related news, I shall conclude my introduction series with its third part, analytics. By now, you have your data set, well-organized (patterned) and trained (enriched), ready to go. You only need to find the proper tactics and strategy to reach your goal, i.e. get the data to talk and find the solution to your issues, or validate your assumptions.

What is this analytical work like? Sometimes complicated. Often critical. Always time-consuming.

Let us first ask ourselves a few questions:

  • Is my software adapted and scaled to reach my business goals? → needless to say, although… a nice database requires an efficient interface
  • What types of tools and techniques may I use to get the best out of my data? → data drill-down and funnel analysis are not the same as random search
  • What are the patterns within my data? → how to reach global conclusions from exemplary data excerpts
  • By the way, do I know why I am storing so much data? → small data sets are often more useful than big (fat and unformed) data mounds

In fact, even though one may build databases in hundreds of ways, using very different tools and techniques, there are only two ways to analyze data: Mine Digging and Stone Cutting.

1. Mine Digging

Mine Digging is the typical initial Big Data work. Barren ground, nothing valuable to be seen; there may be something worth digging for, but you are not even sure… This is often what Big Data offers at first look. A well-seasoned researcher will find out whether something interesting is hidden in the data, just as an experienced geologist deciphering the ground can guess whether any stone might be buried in there. Still, excavating the data to reach more promising levels is a lot of work, often named "drill-down", an interesting parallel to mining… And something will be found for sure, but more often stones of lesser value; a Koh-i-Noor is not always there, unfortunately… This huge excavating work is what I have named Mine Digging.

[Image: Retailer vs. competition analysis]

I have taken an example from a previous market research experience to illustrate this.

Digging in the data is cumbersome. No question. You search for hours for a valid angle of analysis before finding one. And then, suddenly, something. A story to tell, a recommendation to share, the result of some previously taken action to validate. A gem in the dark.

The attached chart shows the example of a given retailer (A) compared to its competitors in a specific catchment area. Its underperformance was so blatant that it was unfortunately heading for a shutdown; however, we found additional data and suggested an assortment reshuffle and a store redesign, which finally helped retailer (A) catch up with the town average in less than 18 months.

This example shows how one may drill down from a full retailer panel to the lowest level (shop/town), so as to find meaningful insights.
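As a minimal sketch of that drill-down, here is how it might look with pandas on hypothetical panel data; the column names and figures are illustrative, not those of the original study.

```python
import pandas as pd

# Hypothetical retailer panel: sales per shop, town and retailer.
panel = pd.DataFrame({
    "retailer": ["A", "A", "B", "C"],
    "town":     ["Lyon", "Lyon", "Lyon", "Lyon"],
    "shop":     ["A1", "A2", "B1", "C1"],
    "sales":    [120.0, 110.0, 240.0, 260.0],
})

# Drill down from the full panel to one catchment area.
town = panel[panel["town"] == "Lyon"]
by_retailer = town.groupby("retailer")["sales"].sum()
town_avg = by_retailer.mean()

print(f"retailer A: {by_retailer['A']:.0f} vs town average {town_avg:.0f}")
# retailer A: 230 vs town average 243 -> the underperformance signal
```

The analytical value is not in the few lines of code but in knowing which level (shop/town) to drill to, and which comparison (town average) makes the gap visible.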

2. Stone Cutting

Stone Cutting has more to do with ongoing analytics, especially those taken from existing software, be it for digital analytics, data mining, or even semantic search. In this case, one already has some raw material in hand, but its cutting depends on current conditions and client wishes… The analytic work here is to find out how to carve the stone and give it the best shape to maximize its value. This refining work is what I name Stone Cutting.

[Image: Click-rate analysis]

I have chosen an example from the web analytics world to illustrate this.

When optimizing an e-commerce website, one very quickly learns which types of action trigger improved conversion; the analytics will then "only" provide some marginal information, e.g. what a specific campaign has brought to the company's business, its ROI, its new visitors and buyers. Very important for the business, for sure. Vital, even.

The attached example shows, for instance, that the efficiency of banner-ad impressions (a click-rate of roughly 5 per thousand impressions) is stable up to 5 impressions; beyond this point, additional impressions are less efficient.

Information straight to the point, with results that are immediately actionable.
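This kind of frequency analysis is easy to reproduce. Below is a minimal sketch on hypothetical log data (one row per impression, with its rank within the user's exposure history); the figures are made up for illustration.

```python
import pandas as pd

# Hypothetical impression log: user, rank of the impression, click flag.
logs = pd.DataFrame({
    "user":    [1, 1, 1, 2, 2, 3, 3, 3, 3, 3],
    "rank":    [1, 2, 3, 1, 2, 1, 2, 3, 4, 5],
    "clicked": [0, 0, 1, 0, 0, 0, 0, 0, 1, 0],
})

# Click-rate for the 1st, 2nd, 3rd... impression served to a user.
ctr_by_rank = logs.groupby("rank")["clicked"].mean()
print(ctr_by_rank)
```

Wherever the ctr_by_rank curve drops off is a natural point at which to cap impression frequency, exactly the conclusion drawn from the chart above.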

So, two ways of analyzing data: one requiring heavy work and a lot of patience, the other relying more on brains and existing patterns; but both are necessary for efficient analytics, from breakthrough discoveries to fine-tuned reporting. Diggers in the South African mines and cutters in the Antwerp jewelry shops are two very different populations, but both of them are necessary to create a diamond ring. Likewise for data analytics: a global, in-depth knowledge of the analytical process is required so as to offer the best consultancy. So let me remind you that my dual experience in marketing and operations is a real nugget, one you can get easily on either a part-time or a full-time basis.

Analytics without cookies? My follow-up to #MeasureCamp IV

As mentioned in my previous post, "Giving up cookies for a new internet… The third age of targeting is at your door.", I attended the fourth MeasureCamp in London (http://www.measurecamp.org) on March 29th. And my (voluntarily controversial) topic was: "Web Analytics without cookies?"

I introduced the subject with the following three charts, a short introduction to what I expected to be a discussion, and a hot one it was!

[Slides: Measure Camp IV]

Basically, the discussion revolved around three topics:

  • Are cookies really going to disappear, and if so, which ones and how?
  • Are cookies disapproved of by users because of their lack of privacy, or rather because of some all-too-aggressive third-party cookie strategies?
  • Are there any solutions, and by when do we need them?

Topic number 1 is definitely the most controversial. It is already difficult to imagine how to do without what has been the basis of collection, targeting and analysis. On top of this, some valid objections were raised, such as the necessity of keeping first-party cookies for a decent browsing experience, as well as the request from a fair share of users to keep ads, provided they are relevant to them. A very good follow-up was brought by James Sandoval (Twitter: @checkyourfuel) and the BrightTag team. Thanks to them for their inputs.

Clearly, the participants all agreed that a cookie ban would only impact third-party cookies, and would occur for political reasons (maybe not within the next 3 to 5 years), unless a huge privacy scandal ignites an accelerated decision process. Still, a fair amount of internet revenue would then be imperiled.

At this stage, there still remains the question of users' acceptance of cookies. There is a wide consensus within the digital community that people browsing the internet accept a reasonable amount of cookie intrusion in their lives, provided it generates relevant ads. Actually, I think this view is biased, as nobody has ever asked whether people would rather browse with or without ads… The question has always been between "wild" and "reasoned" ad targeting… It reminds me of an oil company asking whether car drivers would rather fill up with diesel or unleaded, not allowing "electricity" as a valid answer…

So the question of cookie acceptance remains open in my eyes, and this may be a key driver in designing alternative solutions.

What options do we have at hand then?

The first and obvious one is better regulation of third-party cookies, especially the user's ability to control how, when and with whom their first-party cookies could and should be shared, in an opt-in mode. EU law theoretically rules this (see the EU rules about cookie consent here), through a warning to users about cookies when they open a new website. Still, national transpositions and the various ways web pages are developed have made this law hard to understand, and mostly not actionable on a global basis.

A first step would then be to abide by users' choices and give them the ability to manage their own cookies, sharing some, all or none of them with third parties, as they wish. A difficult task, especially when nearly 30 government bodies are to be involved… So why not investigate non-cookie options?

In London, I have introduced two possible ways:

  1. Create a unique ID for each user, somewhat like Google's unique ID, but managed by an independent body. My suggestion is that such an ID should belong to the whole community, like HTML or HTTP… A huge task.
  2. The other idea is mine… It would consist of generating anonymized profiles based on browsing patterns (see the sketch after this list). I shall develop this idea in more detail in future posts, but it is worth thinking about, especially when one imagines that today's user mood may not be tomorrow's, requiring a very dynamic targeting methodology…
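To give a first idea of what such a profile could be, here is a minimal sketch under strong assumptions: the profile is derived only from the pages of the current session, mapped onto the site taxonomy, so no identifier persists across sessions. All page names and categories are hypothetical.

```python
from collections import Counter

# Hypothetical mapping from page to taxonomy category.
CATEGORY_OF = {
    "/shoes/running": "sport",
    "/shoes/trail":   "sport",
    "/blog/recipes":  "food",
}

def session_profile(pages: list[str], top: int = 2) -> tuple[str, ...]:
    """Reduce one session to its dominant interest categories.
    Every user with similar browsing gets the same profile,
    which is precisely what keeps it anonymous."""
    counts = Counter(CATEGORY_OF.get(p, "other") for p in pages)
    return tuple(cat for cat, _ in counts.most_common(top))

print(session_profile(["/shoes/running", "/shoes/trail", "/blog/recipes"]))
# ('sport', 'food') -> target "sport" now, forget it when the session ends
```

The profile reflects today's mood only; tomorrow's session produces tomorrow's profile, which is exactly the dynamic targeting the idea calls for.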

This hot discussion on cookies has at least initiated debate within the digital community. It also proved that fresh (and sometimes idealistic) views like mine are necessary to keep the digital community on the edge of innovation. So stay tuned, I shall keep providing food for thought so as to "shake the tree" of Measurement…

Giving up cookies for a new internet… The third age of targeting is at your door.

While preparing for next week's MeasureCamp in London (http://www.measurecamp.org), I have been wondering what would be the most interesting topic in my eyes. And my question is: "How would Web Analytics work without cookies?"

Actually, last September I read an interesting post by Laurie Sullivan on the MediaPost.com site: "Where The Next Ad-Targeting Technology Might Come From". It has been at the core of my thoughts for the past months, so I wanted to elaborate on Laurie's post and introduce my own ideas on this topic.

I personally believe that the means of collecting information from web users through cookies is fading and soon to disappear. There are many reasons for this, including user privacy concerns, the lack of contextuality of the cookie, and the proliferation of access points and devices, all of which render such data collection highly hazardous.

The disappearance of cookies would have an impact on at least three areas: data collection, targeting and analytics.

  • Data collection is highly dependent on cookies, especially when dealing with ad exposure and browsing habits. High impact.
  • Targeting is also based on cookies, as most tools use history to identify their most likely customers. High impact.
  • Analytics also use cookies, especially for site-centric analysis as well as various page-level analyses. High impact.

Considering these high impacts, the time has come for more contextual and more behavioral targeting. We are now entering the third age of targeting. The first age was based on sociodemographics, widely used by TV ads or direct mail. The second age has been based on using past behavior to predict potential future actions, and, on the internet, widely uses cookies to pursue this goal. The third age will be the age of context, targeting anonymous users with current common interests.

How will it work? One possible way: we would use network log files (provided by ISPs or telcos) to collect data, organize these data with a categorization at various levels and across multiple dimensions so as to generate rich but heterogeneous user clusters, and hence allow the targeting of potential customers based on ad-hoc inputs. I shall elaborate in further posts, especially regarding the process, but the main advantage is respect for privacy, especially thanks to cookie avoidance…
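As a minimal sketch of the clustering step, assume each anonymous session has already been reduced to counts of visits per content category (the categories and figures below are hypothetical); k-means is just one clustering choice among many.

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows: anonymous sessions; columns: visits per category (sport, news, food).
X = np.array([
    [9, 1, 0],
    [8, 2, 1],
    [0, 7, 2],
    [1, 8, 3],
    [0, 1, 9],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)           # which cluster each session belongs to
print(kmeans.cluster_centers_)  # the "current common interests" of each cluster
```

Targeting then addresses a cluster (a shared, current interest), never an individual, which is where the privacy gain comes from.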

[Image: Cookie Monster]

So, yes, giving up cookies may be difficult; this is why I believe we ought to prepare to go on a diet as of today…

And act for alternative methodologies instead of shouting “me want cookies!”