Author Archive

The Problem With Attribution

July 17th, 2015

Repost of my Data Driven Thinking byline published by AdExchanger

In recent months we’ve heard some noise about the problems with using multi-touch attribution to measure and optimize ad spend (see articles in AdExchanger and Digiday). Some claim attribution is flawed due to the presence of non-viewable ads in user conversion paths. Others say attribution does not prove causality and should therefore be disregarded.

My view is that these naysayers are either painting with too big of a brush or they’re missing the canvas altogether.

Put The Big Brush Away 

The universe of attribution vendors, tools and approaches is large and diverse. You can’t take a broad-brushed approach to describe what they do.

If the critics are referring to static attribution models offered by ad servers and site analytics platforms, such as last touch, first touch, U-shaped, time decay and even weighting, I would agree that these are flawed because of the presence of non-viewable ads. Including every impression and click and arbitrarily allocating credit will do more harm than good. But if they’re referring to legitimate, algorithmic attribution solutions, they clearly don’t understand how things work.

First, not all attribution tools include every impression when modeling conversion paths. In some cases, non-viewable impressions can be excluded from the data set via outputs from the ad server or a third-party viewability vendor. For the majority of cases where impression-level viewability is not available, there are proven approaches to excluding and/or discounting the vast majority of non-viewable ads. Non-viewable ads and viewable, low-quality ads almost always occur at very high frequency among converters, with retargeted users often served 50, 100 or more impressions. By excluding these frequency outliers from the data set, you eliminate a very high percentage of non-viewable ads, along with most viewable ads of suspect quality.
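As a rough sketch of the outlier-exclusion step described above (hypothetical data and a hypothetical cutoff; real implementations run against ad-server logs):

```python
from collections import Counter

def exclude_frequency_outliers(impressions, max_frequency=50):
    """Drop all impressions belonging to users whose total frequency
    exceeds max_frequency -- a proxy for cookie-bombed, largely
    non-viewable inventory."""
    freq = Counter(imp["user_id"] for imp in impressions)
    return [imp for imp in impressions if freq[imp["user_id"]] <= max_frequency]

# Hypothetical log: user "a" saw 2 ads; user "b" was bombed with 120
impressions = [{"user_id": "a"}] * 2 + [{"user_id": "b"}] * 120
clean = exclude_frequency_outliers(impressions)
assert len(clean) == 2  # only user "a"'s impressions survive
```

The cutoff is a modeling choice; in practice it would be tuned per campaign.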

Second, unlike static models, machine-learning models are designed to reward ads that contribute and discount ads that appear in the path but do not influence outcomes. Because cookie bombing is inefficient, producing lots of wasted impressions of questionable value, those impressions are typically devalued by good algorithmic attribution models.

By excluding frequency outliers and using machine-learning models to allocate fractional credit, attribution can separate much of the signal from the noise, even the noise you can’t see. And while algorithmic attribution does not necessarily prove causality, a causal inference can be achieved by adding a control group. While not perfect, it’s more than sufficient for helping advertisers optimize spend.
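The control-group idea can be sketched in a few lines (invented numbers): hold out a random slice of the audience, then compare conversion rates between exposed and holdout users to estimate causal lift.

```python
def incremental_lift(exposed_conv, exposed_n, control_conv, control_n):
    """Estimate causal lift from ad exposure by comparing the
    conversion rate of exposed users against a random holdout."""
    exposed_rate = exposed_conv / exposed_n
    control_rate = control_conv / control_n
    return (exposed_rate - control_rate) / control_rate

# Hypothetical campaign: 2.0% conversion among exposed, 1.5% in control
lift = incremental_lift(2_000, 100_000, 150, 10_000)
print(f"Incremental lift over baseline: {lift:.0%}")  # prints "33%"
```

A real study would also test whether the difference is statistically significant before acting on it.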

You Missed The Entire Canvas

Complaining that attribution models are not accurate enough is like chiding Monet for being less precise than Picasso, especially when many advertisers are still painting with their fingers.

It’s easy to split hairs and poke holes in attribution, viewability, brand safety, fraud prevention, device bridging, data unification and other essential ad-tech solutions. But the absence of a bulletproof solution is not a valid reason to continue relying on last century’s metrics, such as click-through rates and converting clicks.

As Voltaire, Confucius and Aristotle said in their own ways, “Perfect is the enemy of good.”
Ironically, so is click-based attribution.

While no one claims to have all the answers with 100% accuracy, fractional attribution modeling can improve media performance over last-click and static models. And while not every advertiser can be the next Van Gogh, they can use the tools and data that exist today to get a solid “A” in art class.

The Picture We Should Be Painting
I’m a big fan of viewability tools and causality studies, and I’m an advocate for incorporating both into attribution models. I am not a fan of throwing stones based on inaccurate or theoretical arguments.
Every campaign should use tools to identify fraud, non-viewable ads and suspect placements. The outputs from these tools should be inputs to attribution models, and every advertiser should carve out a small budget for testing. While this is an idealistic picture, it may not be too far away. As the industry matures, capabilities are integrated and advertisers, including agencies and brands, learn to use the tools, we will get closer to marketing Nirvana.

In the meantime, advertisers should continue to make gradual improvements in how they serve, measure and optimize media. Even if it’s not perfect, every step counts.

Ad-tech companies should remember we’re all part of an interdependent ecosystem. We need to work together to help advertisers get more from their media budgets. And we all need to have realistic expectations. From a measurement perspective, the industry will always be in catch-up mode, trying to validate the shiny new objects being created by media companies.

All that said, we can do much more today than only one year ago. We’ll continue to make progress. Advertisers will be more successful. And that will be good for everyone.

Steve Latham
@stevelatham

Shedding Light Beneath the Attribution Canopy

May 22nd, 2015

AdExchanger recently published a timely article, “Breaking through the Attribution Canopy,” on the attribution marketplace (view it on Encore’s Facebook page). Overall they did a good job of highlighting the conflicts of interest that are inherent when your media vendor is also your trusted source of insights.  They also touched on the emergence of new solutions that are designed to address the needs of the larger market.  Along with other industry executives, I was quoted in the interview.

During the interview, we discussed a lot of issues surrounding media attribution and optimization.  But as with any interview, only a few of my comments were published.  To provide some context and clarify our POV, here are the key takeaways:

  • We are glad to see that Attribution has (finally) reached a tipping point.  Brands, agencies, DSPs and media platforms are scrambling to leverage machine-based insights to optimize media spend.  Continuing to rely on last-touch KPIs is simply a lazy and irresponsible approach to measuring media.
  • We believe measurement, analysis and optimization decisions should be driven by the advertiser, its agency or an independent solution provider, not its media vendor.  Even if the fox is friendly, it shouldn’t be in the hen house.
  • We also believe data should be easily ported, integrated and made available for analysis, regardless of who sells the media or who serves the ads.  Openness, transparency and portability are not only ideological values; they also make business sense.
  • The growing concentration of power of leading media and technology vendors should be on everyone’s radar as a threat to transparency and openness.  If you look at the markets for programmatic display, video advertising, search, social marketing, mobile advertising* and ad serving, the dominant players are making it difficult and expensive to independently analyze their data in the context of other media. The path to marketing and advertising success does not end in a walled garden.
  •  To date, advanced insights (e.g. algorithmic attribution and data-driven optimization tools) have been reserved for the largest advertisers who can afford six-figure price tags.  As the article points out, there is a large unmet need beyond the top 200 advertisers.  To address the needs of the thousands of middle market advertisers, a new model (no pun intended) is needed.  Heavy, expensive and service-intensive solutions cannot scale across the broader market.  The next phase of adoption will be won by light and agile solutions that are affordable and easy to implement.
  • To deliver modeled insights at scale, the solution must be automated, efficient, flexible and customizable for each advertiser.  It should also be affordable.  On this point, we wholeheartedly agree with Forrester’s Tina Moffett: “I think one advantage [attribution start-ups] do have is they were able to see the market needs and where the gaps were … and where existing players were falling short.”

For these reasons, we are very excited about the prospects for innovators who are able to address unmet needs for the large and growing middle market.

*For more on my quote that Google gets half of all mobile ad dollars, please see the eMarketer report published earlier this year.

As always, thanks for reading and feel free to share comments or contact me if you have any questions.

Steve Latham
@stevelatham

The Value of Data: Our POV on Verizon-AOL

May 19th, 2015

I was recently interviewed by Advertising Age on the data angle of the Verizon-AOL deal (read the AdAge article).  While still fresh in my mind, I thought I’d share our POV.

First, Verizon already has unprecedented insight into what people are doing:

  • They know which devices are talking to their network and see the packets and requests to each device (i.e. they capture all data sent and received)
  • They record every user session (e.g. using an app, typing an email or browsing the web)
  • Whether you’re using Verizon’s wireless service or local Wi-Fi to access the Internet, all data is captured
  • They track online behavior via cookies and relationships with 3rd parties (e.g. the AOL-owned Huffington Post)
  • They connect devices to each other and to desktops and households better than anyone else
  • We believe Verizon already collects more data than any other provider

Acquiring AOL gives Verizon the ability to analyze that data and use it for advanced targeting of digital media.  While AOL’s sites (e.g. HuffPo) have some value, the real value is selling advanced audience targeting through AOL’s programmatic buying platforms for advertisers and publishers.  In short, Verizon has the diamond mines, and AOL provides the mining equipment, sales and distribution.

And Verizon’s diamonds will have superior cut and clarity compared to what today’s competitors can offer, as it can provide deeper insight into customer behavior across platforms and devices.  While competitors are trying to stitch together the pieces from the outside in, Verizon has already bolted them together from the inside out.

Not to say that AOL’s data isn’t valuable too.  In recent years AOL has done a great job of developing a very large proprietary data platform.  The Verizon deal will enhance AOL’s data in numerous ways:

  • Expand the reach of users
  • Expand the data on each user: demo, geo, behavioral, etc.
  • Enable better multi-platform / device bridging
  • Improve resolution and accuracy

So at the end of the day, this deal is about mining all that data and converting it into revenue.  As noted in the AdAge article, they will need to do this very carefully and responsibly. Verizon has a spotty record among privacy advocates so it would be smart to proceed with caution.

Thanks for your time and interest.  I look forward to your comments.

Steve Latham
@stevelatham



Investing Confidently (and Safely) in Programmatic

March 28th, 2015

Over the past few years, we’ve spent a lot of time advising Brands and Agencies on the challenges and risks associated with Programmatic buying (which for this post will encompass exchange traded media, RTB, etc.).  While the idea of machine-based buying is exciting, it’s not without significant challenges and risks.  Having analyzed dozens of programmatic campaigns, we’ve found that a blind leap into Programmatic is almost always a costly endeavor.  The thesis for taking a smart approach to programmatic buying is summarized below:

  • While the promise of self-optimizing buying is intriguing, it doesn’t replace the need for objective, rational analysis.
  • Programmatic optimization is typically based on a broken model.  The continued reliance on clicks, post-click and post-view metrics may do more harm than good.
  • Algorithmic attribution is critical for measuring and optimizing media.  Fractional, statistical analysis is needed for accurate and impactful cross-channel, full-funnel insights.
  • As brands shift more of their budgets to programmatic, the need for objective, attribution-based insights will become even more critical

I recently documented some of the key lessons learned in the embedded presentation, “Investing Confidently in Programmatic.”  I thought about calling it “How to Avoid Wasting Half of Your Media Budget” but opted for the more positive spin. Either would be sufficiently accurate.

In it, I address some of the risks and challenges of Programmatic buying, along with recommendations for ensuring a successful investment in this rapidly changing arena.  Also included is a SWOT analysis to frame the strengths, weaknesses, opportunities and threats that advertisers must deal with to be successful in this new area of machine-based buying.

As always, your comments and questions are welcome – just post!  If you’d like a copy of this presentation please contact us.

Steve Latham
@stevelatham



The Growing Need for Device Bridging

December 1st, 2014

As an industry, we are quickly moving to a “mobile first” world where mobile engagement is becoming an increasingly important part of the customer journey in most considered purchases. From a targeting standpoint, digital publishers have done a decent job of assembling the components to engage individuals across desktops, laptops, mobile phones and tablets. But on the measurement side of the spectrum, marketers are way behind the curve.

Defining the Problem
Traditional platforms and measurement tools use cookies or alternative IDs to track the behavior of each browser as an individual user. As consumers are increasingly using multiple devices as part of their everyday lives, single-screen tracking provides a very limited view of user-level engagement. Even within your mobile device, it’s difficult to connect your mobile browser and applications, meaning most publishers and advertisers won’t know you’re the same person who saw an in-app ad before navigating to their site through a search on your mobile browser. When you factor in the use of tablets, desktops and laptops, the challenge becomes even more complex. Let’s use the following example to illustrate the challenge:

Suppose you see an ad for TravelX (fictitious travel site) on your iPhone’s mobile browser and you remember you need to book a plane ticket home for the holidays. So you go to the App store and download the TravelX app and book a flight home. Unfortunately, TravelX will not know that the mobile ad you saw on Safari drove the purchase you just made on their mobile App. To them, you appear to be two different users.

In your confirmation email is an ad for a great rate on a rental car. You see it on your phone and make a mental note to do some research tomorrow. While at work the following day, you visit the TravelX site on your laptop to check out rates. You’re not ready to buy but you’ve definitely shown interest and are deep in the funnel. But unless you sign in using the same ID as your mobile app, TravelX will not know you are the same person who just booked a flight through their app. Again, they will classify you as a unique user.

To sum up, you responded to TravelX’s ad, downloaded their app, booked a flight, and are now considering a rental car.  While Google, Facebook and others may help them retarget you on your laptop and mobile device, TravelX won’t know how to accurately attribute credit for the conversions.  Given the gaps in “traditional” digital measurement, TravelX doesn’t know its integrated media plan is working so well.  They are still wondering what caused you to download their app in the first place while questioning the value of the seemingly ineffective mobile browser ad that got your attention, but not your click.


This example illustrates the challenges marketers face in creating a unified view of each customer (even on the sites you frequent).  If you’re like the majority of advertisers who do not require users to authenticate, the challenge of measuring the source of new conversions is even harder.

Beyond the bridging problem, we also see a lot of advertisers who rely on their publishers to serve mobile ads and report the results of the campaign.  As with site-served desktop ads, this leaves a lot of unanswered questions about the true reach, frequency and timing of ads being served and/or rendered.

Solving the Device Bridging Problem

In recent years some innovative companies have emerged to address the gaps in device bridging and data unification.  A few of them are even independent of your media buy, which is very important (let’s keep the foxes out of the hen house). While no one solution offers the silver bullet to address this problem, point solutions are available to:

  • Serve ads to mobile browsers, using a device ID vs. cookies to track user engagement
  • Bridge mobile browsers to in-app engagement
  • Bridge mobile devices to other devices, desktops and laptops used by the same individual
  • Connect device IDs to a universal user ID that can be used for online and offline CRM efforts

By bridging device-specific data at the user level, advertisers can connect all impressions, visits and conversions associated with each customer journey, regardless of platform or device.  For those brands going head-first into mobile, this will be required to truly understand reach, frequency and cross-platform synergies for each campaign.
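The unification step can be sketched as a lookup against a device graph (all IDs and events below are hypothetical; vendors like Tapad maintain the real graphs):

```python
# Hypothetical device graph: each device-level ID maps to one universal user ID
DEVICE_GRAPH = {
    "iphone-safari-cookie": "user-42",
    "iphone-app-idfa": "user-42",
    "laptop-chrome-cookie": "user-42",
}

def unify_events(events, graph):
    """Re-key device-level events into user-level journeys."""
    journeys = {}
    for event in events:
        # Unmatched devices remain as their own "user"
        user = graph.get(event["device_id"], event["device_id"])
        journeys.setdefault(user, []).append(event["action"])
    return journeys

events = [
    {"device_id": "iphone-safari-cookie", "action": "saw_mobile_ad"},
    {"device_id": "iphone-app-idfa", "action": "booked_flight"},
    {"device_id": "laptop-chrome-cookie", "action": "browsed_rental_cars"},
]
journeys = unify_events(events, DEVICE_GRAPH)
# Three devices collapse into a single TravelX-style customer journey
assert journeys == {"user-42": ["saw_mobile_ad", "booked_flight", "browsed_rental_cars"]}
```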

Armed with these insights, marketers can plan more effectively, reduce waste and optimize spend while better understanding how and when consumers engage with their brand.  Without these insights, there will be some lingering questions about the relative contribution, impact and ROI from each digital media buy.

(Images courtesy of Tapad)

As always, comments are welcome!

Steve Latham
@stevelatham



Encore Launches New Attribution Platform

November 13th, 2014

Encore’s Customer-Driven Solution Makes Algorithmic Optimization Easy and Affordable 

New York City — Encore Media Metrics, a pioneer in digital attribution, today announced the launch of its new cloud-based analytics platform. With the introduction, Encore is making advanced measurement and impactful optimization easy, efficient and affordable for brands and agencies of all sizes.

According to Tina Moffett of Forrester, “Marketers need guidance on development, management and analysis of complex marketing efforts. Attribution vendors need to deliver what customers are looking for: turning insights into action, calculating accurate performance metrics, providing a holistic view of the customer purchase path, and taming the messiness of big data.”

Leveraging six years of research and innovation, Encore designed its platform to address these unmet customer needs, delivering clear and compelling insights that are readily available and easy to act on – without data overload. Beyond automated ingestion, algorithmic modeling and visualization of paid, earned and owned media, Encore’s new platform provides powerful insights, and prescriptive recommendations for reducing waste and optimizing spend. Encore’s platform enables customers to make real-time updates and modifications directly in the web-based User Interface, providing unique personalization and efficiency. Best of all, Encore’s solution is easy to implement, easy to use and priced to fit budgets of all sizes.

“Encore’s new platform delivers the robust analysis, insights and recommendations marketers need to measure and optimize campaigns with confidence,” said Gunnard Johnson, SVP Analytics, Centro. “The new interface delivers the insights that matter without overwhelming us with data. They’ve done a great job of delivering statistically modeled outputs while giving us control and flexibility in how the insights are presented.  I believe it’s a smart and affordable solution that fills the gap in today’s marketplace.”

“We wanted to make advanced analytics easy and efficient for brands, agencies and partners,” said Steve Latham, founder and CEO of Encore. “Therefore, we had to develop a scalable, machine-based solution that would provide clear and compelling insights, along with actionable recommendations for optimizing spend. We also had to simplify the user experience by reducing the complexity and level of effort typically required of a customer. Lastly, we had to make it affordable for the majority of the market.”

Encore’s new cloud-based platform aggregates and analyzes conversion path data using proven models for attributing fractional credit to each interaction along the way. Programmatic reports, insights and recommendations for optimizing spend are presented in a customizable User Interface that allows each customer to personalize how insights are presented and acted on. A predictive analysis module forecasts future results and the expected lift in performance from optimizing on Encore’s insights.

“Encore’s platform delivers real savings and improved returns for our clients” said Kunick Kapadia, Channel Analytics Supervisor, The Gate Worldwide. “Analyzing the true impact of each media channel is an enormous undertaking, but Encore’s real-time dashboard unlocks these data points and turns them into actionable insights. They have more than paid for themselves through the savings and enhanced ROI as a result of our partnership. I would recommend Encore as the go-to solution for clients of all sizes.”

Learn more about Encore’s new Attribution Platform.

Observations on the Attribution Market

July 7th, 2014


The market for Attribution companies has definitely heated up with high profile acquisitions by Google and AOL.  I view these transactions as strong proof points that brands and their agencies are starving for advanced data-driven insights to optimize their investments in digital media.  The same thesis that led us to start Encore several years ago still holds true today: traditional metrics are no longer sufficient and advanced insights are needed to truly understand what works, what doesn’t, and how to improve ROI from marketing dollars.

Over the years we’ve analyzed more than 100 brand and agency campaigns of all sizes – from the largest CPG companies in the world, to emerging challengers in Retail, Automotive, Travel and B2B.  Based on these experiences, here are 5 observations that I’ll share today:

  1. We are still early in the adoption curve.   While many brands and agencies have invested in pilots and proofs of concept, enterprise-wide (or agency-wide) adoption of fractional attribution metrics is still relatively low, and the big growth curve is still ahead of us.  About 18 months ago I wrote about 2013 being the year Attribution Crosses the Chasm.  I now see I was a bit early in my prediction – 2014 is clearly the year Attribution grows up.
  2. There is still some confusion about who should “own” cross-channel / full-funnel attribution.  Historically brands have delegated media measurement to their agencies.  We now see brands taking on a more active role in deciding how big data is used to analyze and inform media buys.  And as the silos are falling, the measurement needs of the advertiser often transcend the purview of their media agency.  In my opinion, responsibility for measurement of Paid, Owned and Earned media will increasingly shift from the agencies to the brands they serve.  This is already the case for many CPG companies we serve.  In measuring media for more than a dozen big consumer brands, we’re seeing the in-house teams setting direction and strategy, while agencies play a supporting role in the measurement and optimization process.  We’re happy to work with either side; they just need to decide who owns the responsibility for insights.
  3. Multi-platform measurement is coming, but not as fast as you might think.  We are big believers in the need for device bridging and multi-platform measurement and are working with great companies like Tapad to address the unmet need of unifying data to have a more comprehensive view of customer engagement.  To date we’ve presented Device Bridging POVs to most of our customers.  And while many are interested in this subject, very few will invest this year.  It’s not that the demand isn’t there – it will just take some time to mature.
  4. Marketers need objective and independent insights – now more than ever.  Despite increasing efforts by big media companies to bundle analytics with their media, the days of relying on a media vendor to tell you how well their ads performed are limited.  It’s fine to get their take on how they contributed to your business goals, but agencies and brands need objective 3rd party insights to validate the true impact of each media buy.  And with the growing reliance on exchange-traded media and machine-based decisioning, objective, expert analysis is needed more than ever to de-risk spend and improve ROI.  We’ve found this approach works well – especially in days like these where it’s all about sales.  This leads to my fifth observation…
  5. In the end it’s about Sales.  While digital KPIs are great for measuring online engagement, we’re seeing more and more interest in connecting digital engagement to offline sales.  Again, we’re fortunate to work with great partners like (m)PHASIZE to connect the dots and show the true impact of digital spend on offline sales.  We’re also working on opportunities with LiveRamp and Mastercard to achieve similar goals.  Like device bridging, I see this becoming more of a must-have in 2015, but it’s good to have the conversations today.

There is so much more to discuss and I’m sure our market will continue to iterate and evolve quickly.  But to sum it up, it’s an exciting time to be in the digital media measurement space. Attribution is finally coming of age and it’s going to be a hell of a ride for the next few years.

As always, comments are welcome!

Steve Latham
@stevelatham



Inefficiencies of Exchange Traded Media

January 21st, 2014

Encore’s latest POV on Inefficiencies of Exchange Traded Media was published by AdExchanger on January 21, 2014.  You can read the article on AdExchanger or read the POV below.

While exchange-traded media is praised for bringing efficiency to the display ad market, a deeper dive reveals numerous factors that are costing advertisers billions of dollars in wasted spend.  While programmatic buying is relatively efficient (compared to other media), on an absolute basis a lot of wasted spend goes unnoticed.

Research shows that perverse incentives, a lack of controls and limited use of advanced analytical tools have made a majority of exchange-traded media worthless.  Even as we advance how we buy and sell media, there is still significant room for improvement in the quality and economic returns from real-time bidding (RTB).

Where Waste Occurs

Optimizing media starts with eliminating wasted spending. In the RTB world, waste can take many forms:

  • Fraud: Either 1x1s sold into exchanges to generate revenue, or impressions served to bots and other non-human traffic.
  • Non-viewable ads: These are legitimate ads that are not viewable by the user.
  • Low-quality inventory: Ads served on pages whose primary purpose is to house six, eight, 10 or more ads.
  • Insufficient frequency: Too few ads served per user – one or two – to create desired awareness.
  • Excessive frequency: Too many ads served to individual users – 100, 500 or more RTB impressions over 30 days
  • Redundant reach: Multiple vendors target the same users. This is often a consequence of vendors using the same retargeting or behavioral tactics to reach the same audiences.

Quantifying The Costs

The percentage of wasted impressions varies by campaign, but it’s usually quite significant. Here are some general ranges of wasted RTB impressions:

  • +/- 20% of exchange-traded inventory is deemed fraudulent, according to the Integral Ad Science Semi-Annual Review 2013.
  • +/- 7% of viewable inventory is served on ad farm pages (more than six ads)
  • +/- 60% of legitimate inventory is not viewable per the IAB standard
  • 10 to 40% of impressions are served to users with frequency too low to influence their behavior
  • 5 to 30% of impressions are served to users with frequency greater than 100 over the previous 30 days (the more vendors, the higher the waste due to redundant reach and excessive retargeting)

To put this in the context of a typical campaign, assume 100 million RTB Impressions are served in a given month.

(Infographic: breakdown of wasted impressions across 100 million RTB impressions)
In most cases, less than 20% of RTB impressions are viewable by humans on legitimate sites with appropriate frequency. In other words, 20% of all impressions drive 99% of the results from programmatic buying.  Because RTB impressions are so inexpensive, it’s still a very cost-effective channel.  That said, there is considerable room for improvement within RTB buying.
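That headline figure can be reproduced with back-of-the-envelope arithmetic, applying illustrative midpoints of the ranges above in sequence (my own rough numbers, not exact campaign data):

```python
impressions = 100_000_000

legit = impressions * (1 - 0.20)   # remove ~20% fraudulent inventory
viewable = legit * (1 - 0.60)      # remove ~60% non-viewable (IAB standard)
quality = viewable * (1 - 0.07)    # remove ~7% served on ad-farm pages
effective = quality * (1 - 0.35)   # remove ~35% with too-low/too-high frequency

print(f"Effective impressions: {effective:,.0f} "
      f"({effective / impressions:.0%} of total)")
# prints "Effective impressions: 19,344,000 (19% of total)"
```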

Who’s To Blame?

When we present these analyses to clients, the first question often asked is, “Who’s to blame?” Unfortunately, there is no single culprit behind the RTB inventory problem. As mentioned, the problem is due largely to a lack of controls and perverse incentives.

  • Lack of Controls: While a growing number of brands and agencies are incorporating viewability and using algorithmic analytical tools, most are still in the dark ages. Some feel their results are “good enough” and choose not to dig deeper. Others seem not to care. Hopefully this will change.
  • Perverse incentives: We live in a CPM world where everyone in the RTB value chain – save the advertiser – profits from wasted spending. It’s not just the DSPs, exchanges and ad networks that benefit; traditional publishers now extend their inventory through RTB and unknowingly contribute to the problems mentioned above. While steps are being taken to address these issues, we’re not going to see dramatic improvement until the status quo is challenged.

How To Fix The Problem

The good news is that the RTB inventory problems are solvable. Some tactical fixes include:

  • We should invest in viewability, fraud detection and prevention, and algorithmic attribution solutions. While not expensive, they do require a modest investment of time, energy and budget. But when you consider the cost of doing nothing – and wasting 50 to 80% of spending – the business case for investing is very compelling.
  • We need to stop using multiple trading desks and RTB ad networks on a single campaign, or they’ll end up competing against each other for the same impressions. This will reduce the redundant reach and excessive frequency while keeping a lid on CPMs. It will also make it easier to pinpoint problems when they occur.
  • Finally, we need to analyze frequency distribution each month. Average frequency is a bad metric, as it can mask a lot of waste. If 100 users are served only one ad each, and one user is served 500 ads, the average frequency is about six, yet the 500 impressions served to that one user – more than 80% of the total – are wasted. Look at the distribution of ads by frequency tier to see where waste is occurring.
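The frequency example in the last bullet, worked in code (hypothetical frequencies):

```python
from collections import Counter

# 100 users saw 1 ad each; 1 user was served 500 ads
frequencies = [1] * 100 + [500]

total_ads = sum(frequencies)
avg = total_ads / len(frequencies)
print(f"Average frequency: {avg:.1f}")  # ~5.9 -- looks healthy

# The distribution by frequency tier tells the real story
tiers = Counter()
for f in frequencies:
    tier = "1-2" if f <= 2 else "3-10" if f <= 10 else "11-100" if f <= 100 else "100+"
    tiers[tier] += f  # count impressions (not users) per tier

for tier, imps in tiers.items():
    print(f"{tier:>7} ads/user: {imps:4d} impressions ({imps / total_ads:.0%})")
```

Here a healthy-looking average of about six hides the fact that 83% of impressions went to a single cookie-bombed user.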

For strategic change to occur, brands and their agencies must lead the way. In this case, “leading” means implementing controls and making their vendors accountable for quality and performance of display media.

  • Brands must demand more accountability from their agencies. They also need to equip them with the tools and solutions to address the underlying problems.
  • Agencies must demand better controls and make-goods from media vendors. Until we have better controls for preventing fraud and improving the quality of reach and frequency, media vendors need to stand behind their product, enforce frequency caps and make internal investments to improve the quality and efficiency of their inventory.
  • All buyers must hold their DSPs and exchanges accountable for implementing more comprehensive solutions to address the fraud and frequency problems.

 

The Opportunity

We can’t expect a utopian world where no ads are wasted, but we can and should make dramatic improvements. By reducing waste, advertisers will see even greater returns from display media. Higher returns translate into larger media budget allocations, and that will benefit us all.

While fixing the problems may dampen near-term RTB growth prospects, it will serve everyone in the long run. Removing waste and improving quality of media will help avoid a bubble while contributing to the sustainable growth of the digital media industry.  Given the growing momentum in the public and private equity markets, I hope we as an industry take action sooner rather than later.

As always, comments are welcome.

Steve Latham
@stevelatham




 

Algorithmic Attribution SES Chicago

November 7th, 2013

At SES Chicago I introduced algorithmic attribution and discussed the implications for search marketers. Please feel free to download the deck and let me know if you have any questions!

Download pdf:  Algorithmic Attribution SESChicago2013

Steve Latham
@stevelatham

 

Demystifying Attribution: Which Approach is Best?

June 23rd, 2012

As published by Adotas 6/22/12 (view article)

Digital media attribution is a hot topic, but it’s still a confusing concept for many. Given the lack of industry standards and an ever-expanding list of attribution methodologies, it can be difficult for marketers to determine exactly what they need. This post aims to educate marketers and enable them to optimize digital spend through advanced insights.

The Funnel and Attribution

Simply defined, attribution is allocating credit to each interaction that drives a desired action (visit, goal page view, conversion, etc.). Within this broad definition, there are two primary distinctions:

• Lower-funnel or click-based attribution incorporates assist clicks when allocating credit for conversions. Compared to last-click reporting, this is a step in the right direction. The limitation of lower-funnel attribution is that it severely discounts the role of display advertising while overstating the role of search, affiliate, email and other click-centric media. For online advertisers seeking a more complete picture, a full-funnel view is required.

• Full-funnel attribution builds on click-based attribution by incorporating assist impressions from display ads (video, rich media, Flash and GIFs) when allocating credit for visits or conversions. Recognizing that display ads can be very effective even in the absence of clicks, a full-funnel attribution model is needed to quantify the true impact display ads have in creating awareness, consideration and preference.

Cross-Channel Attribution

Cross-channel attribution addresses the role each digital channel (display, paid search, natural search, email, affiliate, etc.) plays in the customer engagement process. While conversion paths are interesting, they aren’t very actionable. To truly understand and optimize each channel, you must allocate fractional credit to each channel and placement that contributes to a measurable action. This generally results in a shifting of credit from non-paid channels (organic search, direct navigation, referring sites) back to the paid media (display, paid search, email, etc.) that “fed” the non-paid channels. Before diving into weighting methodologies, let’s first look at leading approaches to attribution.
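To illustrate the mechanics of fractional allocation (not a recommended weighting model), the sketch below splits each conversion’s credit evenly across the channels in its path and aggregates by channel; a real solution would replace the even split with modeled weightings. The `fractional_credit` helper and the sample paths are hypothetical:

```python
from collections import defaultdict

def fractional_credit(paths):
    """Split each conversion's one unit of credit evenly across the channels
    that touched it, then aggregate totals by channel."""
    credit = defaultdict(float)
    for path in paths:
        share = 1.0 / len(path)
        for channel in path:
            credit[channel] += share
    return dict(credit)

# Hypothetical conversion paths: display often "feeds" later search and email.
paths = [
    ["display", "paid_search"],
    ["display", "email", "organic_search"],
    ["paid_search"],
]
print(fractional_credit(paths))
```

Even this naive split shifts credit back toward the paid media that fed the non-paid channels, which last-click reporting would have credited entirely to search.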

Three Approaches to Attribution

While there are many approaches to attribution, here are three you should be familiar with:

• Statistical attribution is based on traditional media-mix modeling, which analyzes relationships between disparate data sets. In this approach, you would take three months of impression data and three months of conversion data and look for relationships between them. At best, this approach provides high-level directional signals. If you want granular insights into the impact of each channel, vendor, placement or keyword, you need a more granular approach.

• A/B testing seeks to attribute credit and validate causation by observing results from pre-defined combinations of media placements. A/B testing can be used to measure display’s impact on results from search, as well as the performance of a specific creative, vendor, market or channel. While A/B testing is a great way to observe directional insights, it’s nearly impossible to exclude or account for other factors (seasonality, competitors, macro-economics, weather, etc.) that might impact performance between the control and test groups.

• Operational attribution takes a bottom-up (visitor-based) approach to analyzing and allocating credit to the impressions, clicks and visits that precede each conversion. With operational attribution, there is no need to estimate possible conversion paths – you have the actual data, which provides a more granular and accurate data set for advanced analysis (i.e., heuristic and/or statistical modeling). As with any approach, care must be taken to define an appropriate look-back window, cleanse the data set and exclude factors (e.g., wasted impressions and post-conversion visits) that might skew the results.
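As a minimal sketch of the data-cleansing step just described, the snippet below keeps only the events inside a look-back window and drops post-conversion visits. The `build_path` helper, the 30-day window and the sample events are all hypothetical:

```python
from datetime import datetime, timedelta

def build_path(events, conversion_time, lookback_days=30):
    """Keep only events inside the look-back window and strictly before the
    conversion, excluding post-conversion visits that would skew the model."""
    window_start = conversion_time - timedelta(days=lookback_days)
    return [e for e in events if window_start <= e["time"] < conversion_time]

conv = datetime(2012, 6, 1)
events = [
    {"channel": "display", "time": datetime(2012, 4, 1)},      # outside window
    {"channel": "display", "time": datetime(2012, 5, 20)},
    {"channel": "paid_search", "time": datetime(2012, 5, 30)},
    {"channel": "direct", "time": datetime(2012, 6, 2)},       # post-conversion
]
path = build_path(events, conv)
print([e["channel"] for e in path])  # ['display', 'paid_search']
```

Only the cleansed path is then passed to the weighting model, so stale impressions and post-conversion visits never earn credit.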

Manual vs. Statistical Weightings

Within the framework of operational attribution, weighting assist impressions and clicks is both art and science. Here are the two primary approaches:

• Subjective weighting of impressions and clicks: First-generation platforms require the marketer to define the rules for allocating credit to each interaction. While this approach is easy and flexible, it lacks statistical rigor and entails too much guesswork. Moreover, it allows the operator to influence outcomes through the assumptions that drive the model. But the biggest problem is that it allocates credit based on the actual number of assist impressions, rather than using observable data to model how many are actually needed. For example, you may find that 12 impressions preceded an average conversion, when only six impressions were actually needed. If you give credit for wasted impressions, you end up rewarding vendors for over-serving customers.

To reduce subjectivity and improve accuracy in how impressions and clicks are weighted, we must use machine learning and algorithmic modeling.

• Algorithmic weighting: To remove the guesswork from attribution, you can use machine learning and proven algorithms to calculate probability-based weightings for assist impressions and clicks. This removes the arbitrary nature of manual weightings and provides much higher levels of confidence and comfort for marketers.

There are numerous approaches to statistical modeling, and plenty of vendors vying for the “best math” award. While it’s hard to say which approach is best, we have seen that most marketers prefer transparency to opacity, and known algorithms to proprietary models. If you’re going to put your neck on the line to defend a new measurement standard, you should be comfortable with the approach and its underlying assumptions. In general, a transparent, statistically validated approach is best.
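To make “probability-based weightings” less abstract, here is a toy version of one well-known algorithmic idea, the removal effect: weight each channel by the share of conversions that would be lost if that channel were removed from the converting paths. Real algorithmic solutions use far richer models (Markov chains, logistic regression, etc.); the `removal_effects` helper and sample data below are hypothetical:

```python
def removal_effects(paths):
    """Toy removal-effect weighting: a channel's raw effect is the share of
    conversions whose path touched it; weights are normalized to sum to 1."""
    converted = [set(path) for path, did_convert in paths if did_convert]
    channels = set().union(*converted)
    raw = {ch: sum(1 for p in converted if ch in p) / len(converted)
           for ch in channels}
    total = sum(raw.values())
    return {ch: effect / total for ch, effect in raw.items()}

# Hypothetical (path, converted?) observations from an ad-server log.
observations = [
    (["display", "paid_search"], True),
    (["display"], False),
    (["paid_search"], True),
    (["email", "display"], True),
]
weights = removal_effects(observations)
print(weights)  # display and paid_search carry more weight than email
```

The appeal of approaches like this is exactly the transparency argued for above: the weighting rule is a known, inspectable algorithm rather than an analyst’s opinion.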

Understanding Tradeoffs

There is an endless number of attributes you can seek to measure, including segment, format, audience demographics, creative concept, message, frequency, day-part, sequence, etc.  While the idea of Metric Nirvana is appealing, it’s also very elusive if you try to get there overnight.

It’s important to recognize that the more granular you get, the more data you need and the more complex the setup, production, reporting and analysis. Many attempts to go directly from last-click to advanced micro-attribution fail due to the complexity of implementation and analysis.  We all crawled before we walked, and walked before we ran. Analytics should be no different.

If you’re still using last-click metrics, start with channel-level and publisher-level attribution. Once you’ve identified your top performing vendors, delve deeper sequentially (as opposed to all at once) by looking at recency, format, creative and other variables that may have incremental impact on the end results. Just remember that with each incremental variable, the scale, complexity and risk of failure increase dramatically. So start with the low-hanging fruit, and work your way up the tree.

To sum all this up, we at Encore and MediaMind advocate the following best practices:

• Use operational, full-funnel and cross-channel attribution.
• Use machine learning to weigh and allocate fractional credit to assist impressions and clicks.
• Improve your chances of success by starting with the basics and getting more granular over time.

Hopefully you now have a better understanding of the attribution landscape and some of the distinctions within it.  As always, feel free to comment, tweet, like, post, share, or whatever it is you do in your own social sphere.  Thanks for stopping by!

@stevelatham
