Category Archives: Analytics

Key trends for efficient analytics (1): a web taxonomy

In the wake of the most recent MeasureCamp, held in London on March 14th, I have decided to start a series about key trends in analytics.

At MeasureCamp, beyond the usual run of (highly interesting) technical workshops, I identified three trends that are increasingly important for digital practitioners who want to use analytics more efficiently, be it to understand their clients better or to make analytics run more smoothly within the organization.

These three trends are:

  1. using a taxonomy to ensure your website is performing correctly
  2. striking a balance between ethics and privacy when coping with regulations
  3. drawing a path for your analysts to improve their engagement and their relevance

I shall tackle the first topic today, i.e. taxonomy.

This topic has been a constant interest throughout my career, particularly since I came to work on digital data. I even participated in the development of a patent dealing with this subject (System and method for automated classification of web pages and domains). And I believe it is of the highest importance for improving the way your data are collected, and hence the efficiency of your analytics.

Good analytics output takes more than good tag management, an efficient SEO strategy or a healthy AdWords budget (even though all three are obviously necessary). The optimization of analytics starts with a sound website, aligned with your strategy, properly coded and suitably organized to answer key questions about your users and your customers.

There are two key success factors: organizing the whole site in full accordance with your original objectives, and aligning that organization with the experience of the site’s users.

Aligning the site with the strategy is an obvious point. But not an easy one! The strategy may change often: when new products are launched, when fashion trends evolve, when clients upgrade (or downgrade) their expectations… But one can seldom afford to change the structure of the site that often. And over time, your site may no longer be aligned with the company’s goals, at least not in all its parts.

The first reaction will be to run batches of A/B tests to find local improvements, new layouts, better priorities; but these are short-term fixes, while the tide slowly drifts away…

Thinking ahead is definitely better in the long term, and a proper taxonomy gives you the flexibility to align your website’s key measurement points with a volatile strategy. Why? Because a taxonomy is a frame, a container organizer, whereas working solely on the keywords, on the “thesaurus” as Heather Hedden says, is a one-shot job.

And it is never too late. Your website does not need to be fully revamped to comply with a taxonomy. You just have to rethink the organization of your internal data and align key concepts with your SEO strategy, as is very well described in this blog post – “Developing a Website Taxonomy to Benefit SEO” – by Jonathan Ellins from Hallam Internet, a UK-based consultancy. I have inserted below one of the graphs used in their post, showing the kind of decision tree a taxonomy may generate:

The interesting thing in this post is that it does not focus only on content, but also on what they call “intent” (I personally call it “context”), which opens the door to alternative ways of organizing one’s website data, and ultimately to better analytics.
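
To make the “container organizer” idea concrete, here is a minimal sketch of a taxonomy that maps topic and intent to landing pages; all category names and URLs are invented for illustration, not taken from the Hallam post:

```python
# Minimal sketch of a website taxonomy: topics and intents form the frame,
# pages are slotted into it. All names and URLs below are invented examples.

TAXONOMY = {
    "garden-hedges": {
        "informational": "/guides/choosing-a-hedge",
        "transactional": "/shop/hedging-plants",
    },
    "garden-tools": {
        "informational": "/guides/tool-maintenance",
        "transactional": "/shop/garden-tools",
    },
}

def route(topic: str, intent: str) -> str:
    """Return the landing page for a (topic, intent) pair, with a fallback."""
    return TAXONOMY.get(topic, {}).get(intent, "/search")

# When the strategy shifts, only the mapping changes, not the pages themselves:
print(route("garden-hedges", "transactional"))  # /shop/hedging-plants
print(route("drones", "informational"))         # /search (topic not yet classified)
```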

This brings me to the second success factor: considering the user experience.

Beyond intent, there is a much broader scope, i.e. how the user experiences the website, and the way he or she browses all the way to the eagerly awaited conversion. The usual way to handle such users properly is to define typical personae.

Personae are very useful here, as they not only show the various ways of navigating the website, but also allow the identification of key crossings, loops and dead-ends, which are clear signs of a website that is not aligned with users’ expectations. And a flexible concept like a taxonomy offers the opportunity to alter the logical links between two pages, so as to modify the browsing experience in such a way that users find their way more easily.
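
To illustrate how persona-style path analysis can surface those loops and dead-ends, here is a rough sketch; the page names, sessions and conversion page are all invented assumptions:

```python
from collections import Counter

# Invented sessions: each is the ordered list of pages one visitor browsed.
sessions = [
    ["/home", "/products", "/products/shoes", "/cart", "/checkout"],
    ["/home", "/faq", "/home", "/faq", "/home"],      # loop: the visitor circles around
    ["/home", "/products", "/legacy-landing-page"],   # dead-end: exit, no conversion
]

CONVERSION_PAGES = {"/checkout"}  # assumed conversion point

def loops(path):
    """Pages revisited within a single session, a hint of a navigation loop."""
    return [page for page, n in Counter(path).items() if n > 1]

def is_dead_end(path):
    """True if the session ends without ever reaching a conversion page."""
    return not any(page in CONVERSION_PAGES for page in path)

for path in sessions:
    print(f"exit={path[-1]:<22} loops={loops(path)} dead_end={is_dead_end(path)}")
```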

In conclusion, it certainly is not easy to revamp one’s site on a regular basis, and no easier to change one’s data management system too often. In this respect, a taxonomy applied to your website may offer enough flexibility to cope with this ever-changing world, so that you can keep providing sensible analytics to your stakeholders, even when they change their moods all too often…

Should you be interested in developing such a taxonomy, or at least in discussing the relevance of such an enhancement, I would gladly be your man. I can be contacted here.

Next week, I shall discuss the impacts of ethics and privacy rules on analytics. Stay tuned!

PS: For those interested in a much deeper approach to taxonomy concepts, I recommend Heather Hedden’s blog, “The Accidental Taxonomist”. Beware, not for rookies!

Data Elicitation in three steps (3/3): Data Analytics

Today is wrap-up time. Before dealing with practical use cases and commenting on data-related news, I shall conclude my introduction series with its third part: analytics. By now, you have your data set, well organized (patterned) and trained (enriched), ready to go. You only need to find the proper tactics and strategy to reach your goal, i.e. get the data to talk and find the solution to your issues or validate your assumptions.

What is this analytical work like? Sometimes complicated. Often critical. Always time-consuming.

Let us first ask ourselves a few questions:

  • Is my software adapted and scaled for reaching my business goals? → needless to say, although… a nice database requires an efficient interface
  • What type of tools and techniques may I use for getting the best out of them? → data drill-down and funnel analysis are not the same as random search (see the funnel sketch after this list)
  • What are the patterns within my data? → how to reach global conclusions from representative data excerpts
  • By the way, do I know why I am storing so much data? → small data sets are often more useful than big (fat and unformed) data mounds
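
As a quick illustration of the drill-down/funnel point above, here is a toy funnel computation; the step names and counts are invented, not real campaign figures:

```python
# Invented funnel counts for one month: visits that reached each step.
funnel = [
    ("visit",        50_000),
    ("product page", 22_000),
    ("add to cart",   4_500),
    ("checkout",      2_100),
    ("purchase",      1_700),
]

# Step-to-step conversion: the weakest transition is where to dig further.
for (step, n), (next_step, next_n) in zip(funnel, funnel[1:]):
    print(f"{step:>12} -> {next_step:<12} {next_n / n:6.1%}")
```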

In fact, even though one may build databases in hundreds of ways, using very different tools and techniques, there are only two ways to analyze data: Mine Digging and Stone Cutting.

1. Mine Digging

Mine Digging is the typical initial Big Data work. Barren ground, nothing valuable to be seen; there may be something worth digging for, but you are not even sure… This is often what Big Data offers at first glance. A well-seasoned researcher will find out whether something interesting is hidden in the data, just as an experienced geologist reading the ground can guess whether any stone might be buried there. Still, excavating the data to reach more promising levels is a lot of work, often named “drill down”, an interesting parallel with mining… And something will be found for sure, but more often stones of lesser value; a Koh-i-Noor is not always there, unfortunately… This huge excavating work is what I have named Mine Digging.

I have taken an example (a retailer-vs-competition analysis) from a previous market research experience to illustrate this.

Digging in the data is cumbersome. No question. You search for hours for a valid analysis angle before finding one. And then suddenly, something. A story to tell, a recommendation to share, the result of a previously taken action to validate. A gem in the dark.

The attached chart shows the example of a given retailer (A) compared to its competitors in a specific catchment area. Its underperformance was so blatant that it was unfortunately heading for a shutdown; however, we found additional data and suggested an assortment reshuffle and a store redesign, which finally helped retailer (A) catch up with the town average in less than 18 months.

This example shows how one may drill data down to the lowest level (shop/town) from a full retailer panel, so as to find meaningful insights.
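
For readers who like to see the mechanics, here is a hedged sketch of such a drill-down using pandas; the figures are invented, not the original panel data:

```python
import pandas as pd

# Invented panel: sales by retailer and town (not the original study data).
panel = pd.DataFrame({
    "retailer": ["A", "B", "C", "A", "B", "C"],
    "town":     ["X", "X", "X", "Y", "Y", "Y"],
    "sales":    [120, 340, 290, 410, 380, 395],
})

# Drill down: index each shop against the average of the town it competes in.
town_avg = panel.groupby("town")["sales"].transform("mean")
panel["index_vs_town"] = (panel["sales"] / town_avg * 100).round(1)

# Retailer A in town X stands far below its local average: a candidate for action.
print(panel.sort_values("index_vs_town"))
```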

2. Stone Cutting

Stone Cutting has more to do with ongoing analytics, especially those taken from existing software, be it for digital analytics, data mining or even semantic search. In this case, one already has some raw material in hand, but its cutting depends on current conditions and client wishes… The analytical work here is to find out how to carve the stone and give it the best shape to maximize its value. This refining work is what I name Stone Cutting.

I have chosen an example (a click-rate analysis) from the web analytics world to illustrate this.

When optimizing an e-commerce website, one very quickly knows the type of action that triggers improved conversion; the analytics will then “only” provide some marginal information, e.g. what this specific campaign has brought to the company’s business, its ROI, its new visitors and buyers. Very important for the business, for sure. Vital even.

The attached example shows, for instance, that the efficiency of the banner-ad impression (click-rate per impression at roughly 5 per thousand) is stable up to 5 impressions; beyond this point, additional impressions are less efficient.

Information straight to the point, with results that are immediately actionable.
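
A minimal sketch of how such a click-rate-per-impression curve can be derived from an impression log; the log below is invented and far smaller than any real data set:

```python
import pandas as pd

# Invented impression log: one row per banner display, with its rank for the
# user (1st, 2nd, ... impression) and whether it was clicked.
log = pd.DataFrame({
    "impression_rank": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8],
    "clicked":         [1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0],
})

# Click-rate by impression rank: a flat curve up to rank 5 followed by a drop
# would support capping the frequency at 5 impressions per user.
ctr = log.groupby("impression_rank")["clicked"].mean()
print(ctr)
```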

So, two ways of analyzing data: one requiring heavy work and a lot of patience, the other relying more on brains and existing patterns, but both necessary for efficient analytics, from breakthrough discoveries to fine-tuned reporting. Diggers in the South African mines and cutters in the Antwerp jewelry shops are two very different populations, but both are needed to create a diamond ring. The same goes for data analytics: a global, in-depth knowledge of the analytical process is required to offer the best consultancy. So, let me remind you, my dual experience in marketing and operations is a real nugget that you can easily get on either a part-time or full-time basis.

Analytics without cookies? My follow-up to #MeasureCamp IV

As mentioned in my previous post “Giving up cookies for a new internet… The third age of targeting is at your door.”, I attended the fourth MeasureCamp in London (http://www.measurecamp.org) on March 29th. My (deliberately controversial) topic was: “Web Analytics without cookies?”

The subject was introduced with the following three charts, a short introduction to what I expected to be a discussion, and a hot one it was!

Measure Camp IV (post)

Basically, the discussion revolved around three topics:

  • Are cookies really going to disappear, and if so, which ones and how?
  • Do users disapprove of cookies because of the lack of privacy, or rather because of some all-too-aggressive third-party cookie strategies?
  • Are there any solutions, and by when do we need them?

Topic number 1 is definitely the most controversial. It is already difficult to imagine doing without what has been the basis of collection, targeting and analysis. On top of this, some valid objections were raised, such as the need to keep first-party cookies for a decent browsing experience, as well as the request from a fair share of users to keep ads, provided they are relevant to them. A very good follow-up was brought by James Sandoval (Twitter: @checkyourfuel) and the BrightTag team. Thanks to them for their input.

Clearly, the participants all agreed that a cookie ban would only affect third-party cookies, and would occur for political reasons (maybe not for another 3 to 5 years), unless a huge privacy scandal ignites an accelerated decision process. Still, a fair amount of internet revenue would then be imperiled.

At this stage, there still remains the question of cookie acceptance by users. There is a wide consensus within the digital community that people browsing the internet accept a reasonable amount of cookie intrusion in their lives, provided it generates relevant ads. Actually, I think this view is biased, as nobody has ever asked whether people would rather browse with or without ads… The question has always been between “wild” and “reasoned” ad targeting… It reminds me of an oil company asking whether car drivers would rather fill up with diesel or unleaded, not allowing “electricity” as a valid answer…

So the question of cookie acceptance remains open in my eyes, and this may be a key driver to designing alternative solutions.

What options do we have at hand then?

The first and most obvious one is better regulation of third-party cookies, especially giving users control over how, when and with whom their first-party cookies could and should be shared, on an opt-in basis. EU law theoretically governs this (see the EU rules about cookie consent here), through a warning about cookies shown to the user when he or she opens a new website. Still, national transpositions and the various ways web pages are built have made this law hard to understand, and mostly not actionable on a global basis.

A first step would then be to abide by users’ choices, and give them the ability to manage their own cookies, sharing some, all or none of them with third parties, as they wish. A difficult task, especially when nearly 30 government bodies are involved… So why not investigate non-cookie options?

In London, I have introduced two possible ways:

  1. Create a unique Id for each user, somewhat like Google’s unique Id, but managed by an independent body. My suggestion is that such an Id should belong to the whole community, like HTML or HTTP… A huge task.
  2. The other idea is mine… It would consist of generating anonymized profiles based on browsing patterns. I shall develop this idea in more detail in future posts, but it is worth thinking about, especially when one considers that today’s user mood may not be tomorrow’s, which calls for a very dynamic targeting methodology… (a rough sketch follows below)
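
A very rough sketch of what such a pattern-based, anonymized profile could look like; the content categories, session data and hashing choice are purely my own illustrative assumptions, not a specification:

```python
import hashlib
from collections import Counter

# Invented session: pages already mapped to coarse content categories.
session_categories = ["sport", "sport", "news", "finance", "sport"]

def profile(categories, top_n=2):
    """Coarse, anonymized profile: the session's dominant interests, hashed so
    that no persistent identifier or browsing history is ever exposed."""
    top = sorted(cat for cat, _ in Counter(categories).most_common(top_n))
    token = hashlib.sha256("|".join(top).encode()).hexdigest()[:16]
    return {"interests": top, "token": token}

# The token changes whenever the dominant interests change: targeting follows
# today's mood rather than a stored user ID.
print(profile(session_categories))
```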

So this hot session on cookies has at least initiated discussions within the digital community. It also proved that fresh (and sometimes idealistic) views like mine are necessary to keep the digital community at the cutting edge of innovation. So stay tuned; I shall keep providing food for thought so as to “shake the tree” of Measurement…

Data Privacy, between a rock and a hard place

How are we to handle Data Privacy? Through goodwill, as the original free-internet promoters would like? Or through coercive regulatory measures, as government bodies are prone to impose? This is definitely no easy dilemma…

The Mobile Marketing Association in France wanted to put the question on the table last Wednesday (Feb 12th), on the very day the US was holding the so-called “Safer Internet Day”. The venue leaned toward the goodwill side, as the event was hosted by the Mozilla Foundation in their Paris office. A nice place, by the way, see for yourself…

Mozilla Meeting Room

The discussion panel was more balanced, with Etienne Drouard, attorney at K&L Gates specializing in privacy matters, and Geoffrey Delcroix, Innovation Director at the CNIL (the French data protection authority), as well as Hervé Le Jouan, CEO of Privowny, and Tristan Nitot, Principal Evangelist at Mozilla Europe (a brilliant coffee brewer as well…), the whole thing moderated by Bruno Perrin, Media & Entertainment Leader at EY.

Between tools to manage one’s own privacy (see my own selection at the bottom of this post) and various comments on privacy laws, the main impression left by this panel discussion is that handling Data Privacy is like walking a tightrope…

Two opposite views are currently cleaving the internet:

  • On one side, the “libertarian” internet promoters, with concepts based on the widest possible freedom (net neutrality, open data, open source, etc.), whose view of privacy is linked to each person’s individual right to protect their own privacy. A global “do-not-track” by default would certainly please them, especially if companies were forced to respect it…
  • On the other side, at the opposite end of the spectrum, we have the state bodies, willing to exert more control over the internet, as this is something they not only misunderstand but also fear; in this respect, they wish to institute regulations, privacy by design, control over content, etc.

And, in the middle, the so-called “new economy”, all these companies and people trying to make sensible use of the internet… Not easy, hmm? What I understood very clearly from the panel discussion is that neither of the extreme behaviors depicted above would give the internet a chance. Setting “do-not-track” by default would simply lead companies to ignore it, and hence kill the idea. And on the other side, regulating the market by law would, in the end, technically kill it too. Hence, the tightrope strategy is the only one that remains, with a difficult balance between market freedom and people’s protection, between business and privacy…

So what are we left with? We can try to manage our own privacy, and ensure it does not go beyond the borders we have set. Nobody lives any more in a cave with no contact with the outside world (which would probably be the only way to fully protect one’s privacy…). But nobody wants to live constantly under the eyes of watchers, as in a personal Truman Show, especially when your information is wanted for their business… We may go on using the internet, conscious that we are watched, but managing it, knowingly giving our consent wherever we believe it makes sense and blocking all other unsolicited requests…

There are many tools to do so. Probably too many. I personally use five.

  1. An ad-blocker: this is not a must-have, but it may be useful, especially to speed up your browsing. I use AdBlock, a Chrome extension. The disadvantage is that most ad-blockers do not compensate for the resulting changes in a website’s layout, sometimes making it barely readable (as with, for instance, my favorite sports page, Sport24). And do not forget that most sites earn their money thanks to ads… So I disable it now and then, especially when visiting sites with smaller audiences.
  2. A user/password manager: this is highly interesting, to ensure you know where and with what you have been logging in, and that nobody is using one of your identities without your knowledge. I use the Privowny toolbar, a very useful add-on.
  3. An identity verifier: this is for Twitter in particular. To avoid being followed (and spammed) by robots and fake followers, I use TrueTwit, a simple (and not so expensive) tool to filter and verify any Twitter user. I have fewer followers now, but only real people…
  4. A do-not-track option: I also use, now and then, the do-not-track feature in my browser (Chrome). I do this especially when shopping or banking online, so as to minimize the number of cookies shared by companies that also hold very personal information of mine. I know, this is more wishful thinking, but it at least shows that I am not ready to let everything leak.
  5. A graphical cookie tracer: I have downloaded CookieViz from the CNIL website, a free tool to visualize your browsing and the cookies that have been shared with third parties. At least, when you browse websites, including your favorite ones, you know where you stand… Below is a short description of this tool (currently only available for Windows, with Mac and Unix versions to come).

CookieViz example

The picture shows a session across 7 browsed sites (9 views in total). The 7 websites are “circled” with red pentagons. At the top right is Sport24 (link provided above); below it are the e-commerce website CDiscount.fr and the news website LeMonde.fr.

At the bottom, from right to left, a gaming website BigPoint.com, my About.me profile and this blog’s dashboard page. In the middle, Avinash Kaushik’s blog (Occam’s Razor), showing that even the blog of a respected digital evangelist like Avinash may share third-party cookies…

The graph is, I believe, self-explanatory: the visited websites (red pentagons) generate cookies (the blue dots), which are either kept for first-party use (blue links) or shared with third parties (red links). To be clear, I disabled AdBlock while generating this graph, so as to avoid a partial representation.
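
The first-party / third-party distinction the graph relies on comes down to comparing the cookie’s domain with the domain of the visited site; here is a deliberately naive sketch (the ad-network domain is invented, and real-world logic would need a public-suffix list):

```python
# Naive sketch: classify a cookie as first- or third-party by comparing its
# domain with the visited site. Keeping the last two labels is good enough
# for these examples; production code would use a public-suffix list.

def registrable_domain(host: str) -> str:
    return ".".join(host.lstrip(".").split(".")[-2:])

def classify(visited_site: str, cookie_domain: str) -> str:
    same = registrable_domain(visited_site) == registrable_domain(cookie_domain)
    return "first-party" if same else "third-party"

print(classify("www.lemonde.fr", ".lemonde.fr"))      # first-party
print(classify("www.lemonde.fr", ".adnetwork.com"))   # third-party (invented domain)
```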

This tool is highly interesting in my eyes. It does not block anything, but it shows you everything. At least the user knows what happens when he or she visits a website, and may decide to keep browsing, or choose alternative websites with a better sharing policy, especially regarding third-party cookies.

A better informed customer always makes better choices.

Data Elicitation: my professional new start in 2014

As you could read last week in Revival of a digital non-native, I am now more qualified than ever in Digital Analytics, ready to write the first pages of my professional new start.

It has been very nice to receive so much positive feedback and to see the concrete interest my last post has aroused. As announced last week, I shall now elaborate on what I am up to. This post defines the core business of Data Elicitation. Further posts (in a series of 3) will give much more detail about specific contributions closely linked to my own proficiencies, answering concerns expressed by marketers, namely through this study by StrongView (2014 Marketing Survey). Key areas are:

1. Data patterning

The original sin of Big Data is its formlessness. To be able to use these data and get the best out of them, one must first organize and structure them. This is what patterning is about.

Of course, your engineers will claim they have built the best database ever, and that it should answer any question you have. This may be true. Or not. Actually, many databases are built under technical constraints, with very little regard for usage and user experience, let alone marketing and strategy needs. My own experience testifies that efficient use of data is built first upon a correct understanding of the client’s requests, i.e. that the initial step is not drawing the plan, but thinking about how it will best serve its goal. This has always been a key driver of my work, especially when building up various new services in the marketing information business. I am a resolute data patterner.
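
As a toy illustration of patterning, here is a sketch that maps heterogeneous source records into one agreed target shape; the field names and values are invented:

```python
from dataclasses import dataclass

# Invented raw records: the same business fact stored in three different shapes.
raw = [
    {"client": "ACME", "value": "1200"},
    {"customer_name": "ACME", "amount": 950},
    {"cust": "Betamax", "val": "2300"},
]

@dataclass
class Order:
    customer: str
    amount: float

# Patterning: agree on one target shape driven by the business question,
# then map every source into it.
FIELD_MAP = {
    "customer": ("client", "customer_name", "cust"),
    "amount":   ("value", "amount", "val"),
}

def pattern(record: dict) -> Order:
    def pick(aliases):
        return next(record[a] for a in aliases if a in record)
    return Order(customer=pick(FIELD_MAP["customer"]),
                 amount=float(pick(FIELD_MAP["amount"])))

print([pattern(r) for r in raw])
```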

2. Data enrichment

Your data are rich, especially if you can use them easily thanks to appropriate patterning. But they can certainly be richer. Much richer. And most certainly not at a high cost. This is what enrichment is all about.

You may have tons of data that still do not fit your purposes. Or, on the contrary, small databases with a very high (and maybe hidden) value. And enriching is not only adding external information; it is also deriving, translating and cross-checking existing sources. Market research companies used to call this data enrichment process “coding the dictionary”, a phrase that shows its vastness and complexity. Getting relevance out of the data is definitely a precious skill, and one of my own key proficiencies.
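
A small sketch of the “coding the dictionary” idea: deriving coded fields from free-text answers by cross-checking them against a reference table; every value below is invented:

```python
import pandas as pd

# Invented free-text brand mentions from a survey.
answers = pd.DataFrame({"respondent": [1, 2, 3],
                        "brand_raw": ["coca cola", "Coca-Cola Zero", "pepsi max"]})

# The "dictionary": a reference table mapping raw mentions to coded values.
dictionary = [("coca", "Coca-Cola", "The Coca-Cola Company"),
              ("pepsi", "Pepsi", "PepsiCo")]

def code(raw: str) -> pd.Series:
    for pattern, brand, group in dictionary:
        if pattern in raw.lower():
            return pd.Series({"brand": brand, "group": group})
    return pd.Series({"brand": "Other", "group": "Other"})

# Enrichment: the answers now carry brand and corporate group, derived
# entirely from existing data plus a small reference table.
enriched = answers.join(answers["brand_raw"].apply(code))
print(enriched)
```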

3. Data analytics

Now your data are accessible and usable. Fine. And what next? Getting the best out of your data is not always easy, as the meaning may be blurred, or the solution to the problem lost like a needle in a haystack. This is what analytics is about.

Once your data are fit for use, you need to find the proper tactics and strategy to reach your goal, i.e. get the data to talk and find the solution to your issues or validate your assumptions. This requires solid analytical technique, but also a good nose for identifying where the gems are hidden… In this respect, as a seasoned market research expert with a solid digital background, I shall help you identify where to dig to get the best out of your data.

So in the end, this whole process of patterning, enriching and analyzing data may be summarized under one single word: elicitation. I have chosen Data Elicitation as an umbrella designation for running all these processes and bringing them together as a service.

On a practical level, my door remains open to any CEO who would want my exclusive working time to set up their corporate data marketing strategy (i.e. hire me). Still, current market conditions, notably in France, mean that flexibility is key, especially in the light of project-driven work. This is why I also offer my (invaluable) services as a contractor. So? Drowning in data? Or searching for it desperately? And in need of elicitation? Let us keep in touch, and let 2014 be the year of your ultimate data elicitation!