Author Archive

Omni.Digital Attribution Recap

September 17th, 2015


Last week I had the pleasure of speaking on Attribution at AdExchanger’s Omni.Digital conference in Chicago. Our panel “The Next Wave of Attribution Vendors” was moderated by AdExchanger’s Lead Research Analyst Joanna O’Connell. As usual, there wasn’t time to fully answer all of the questions, so here is a recap in Q&A format.

What is Attribution and what’s it good for?

While most agree on how to define “multi-touch attribution” (attributing fractional credit to the interactions that result in a conversion), each set of stakeholders often uses it for different reasons. For example:

  • Analysts view it as a means of delivering more accurate reports to the media team.
  • Media buyers often use it to validate performance of their media buys.
  • Advertisers often use it to confirm that their media budgets are being properly invested.

While each use case is useful, it is also limited. Fractional attribution by itself is not an end, but rather a means to learning and optimizing. Through statistically validated insights, brands and agencies glean a much better view into which media partners, strategies, formats and creatives work (and which do not). They can also make more intelligent decisions for cutting waste and re-allocating budgets. If marketers want to get the full value from Attribution, they need to act on the insights. If they don’t, they are leaving money on the table.

What is wrong with Attribution solutions, and where is disruption needed?

This answer has several parts.

First, we can all agree that last-click attribution is a flawed approach. Whether desktop or mobile, last-click rewards the lowest-funnel media and penalizes everything above it.

Second, static multi-touch models (e.g. even-weighted, U-shaped, time-decay) are better than last click, but only marginally. These still reward vendors who over-serve likely converters and perpetuate the epidemic of Retargeting Gone Wild.
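To make the critique concrete, here is a minimal sketch of how these static models allocate credit, assuming each conversion path is a list of uniquely identified touchpoints (the time-decay variant takes (touchpoint, timestamp) pairs); the weights — 40% for each endpoint in the U-shaped model, a seven-day half-life for time-decay — are common illustrative defaults, not any vendor’s specification:

```python
def even_weighted(path):
    # Every touchpoint gets equal credit, regardless of influence.
    return {touch: 1.0 / len(path) for touch in path}

def u_shaped(path, end_weight=0.4):
    # First and last touches get 40% each; the middle splits the rest.
    if len(path) == 1:
        return {path[0]: 1.0}
    if len(path) == 2:
        return {path[0]: 0.5, path[1]: 0.5}
    middle = (1.0 - 2 * end_weight) / (len(path) - 2)
    credit = {touch: middle for touch in path[1:-1]}
    credit[path[0]] = end_weight
    credit[path[-1]] = end_weight
    return credit

def time_decay(path_with_times, conversion_time, half_life_days=7.0):
    # Each touch is weighted by 2^(-age / half_life): recency wins.
    weights = {
        touch: 2 ** (-(conversion_time - ts).days / half_life_days)
        for touch, ts in path_with_times
    }
    total = sum(weights.values())
    return {touch: w / total for touch, w in weights.items()}
```

Note that none of these weights are estimated from data: a retargeter that blankets likely converters with late-path impressions is rewarded by all three.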

Third, the first wave of advanced (algorithmic) attribution solutions wasn’t viable for most advertisers: implementations were complex and lengthy, reliance on services continued, and the price tag was one only the largest advertisers could justify.

The “new wave” of vendors recognizes that advertisers need fast, easy and affordable technology-based solutions that leverage readily available data. Extensibility and automation obviate the need for complex integrations and labor-intensive analysis while reducing the time lag between implementation, production and insight. Through rapid onboarding, automated processing and timely reporting, the value proposition is fundamentally changing. Validated insights, recommendations and forecasting, delivered quickly, efficiently and affordably… these are the new table stakes in the Attribution space.

How big a problem is mobile in the world of attribution?

When you consider that attribution is based on conversion path modeling, the lack of user-level mobile data makes analysis very challenging. To assemble mobile conversion paths, you either need a cookie-less ad server or a partner, such as a DMP or mobile conversion vendor, to aggregate publisher data for each device. There are workarounds (e.g. manually aggregating data from publishers and DSPs) but it’s a lot of work. See The Dark Side of Mobile for more on this topic.

What about Google and Facebook?

Google, Facebook and now Verizon aren’t making life easier for advertisers seeking independent validation and advanced insights. A walled garden can be scaled with a running start, but as these publishers consolidate the market, their walls are looking more like the Wall of the North, built to defend against the Wildlings. Those of us on the side of openness and transparency are hopeful that brands and their agencies will vote with their budgets to reverse the trend towards data protectionism.

What progress are marketers making in adopting attribution?

The adoption curve is steepening but it’s still early. Surprisingly (or not), the majority of advertisers still rely on last-touch (click or impression) to reward conversions. Some of the pioneers are still nursing their wounds (especially those who tried to run before learning to walk) and a majority of the settlers are waiting for assurance that the path is safe before proceeding. It’s taken longer than anticipated, but progress is happening.

Among those that are leveraging attribution, most are still picking low-hanging fruit with a focus on desktop media spend. Very few have figured out mobile and even fewer are connecting the dots to gain a multi-platform view of users. While there are solutions available, few brands or agencies have the resources needed to take advantage of these new opportunities. I expect this will change in 2016.

Are advertisers using attribution outputs to plan media mix?

Savvy agencies and brands are acting on the insights, but too many just use Attribution to validate that their media is working. As most media spend is still committed through IOs, media buyers must take action to optimize spend, whether that’s pausing underperforming campaigns, re-allocating budgets to top performers or addressing frequency issues with their vendors.

While all agencies claim to be active in their approach to campaign management, too often they tend to “set it and forget it.” We have some great clients who actively review results and take action to improve performance. But there is also a subset of agencies who are either too busy or lack the resources and/or commitment to capitalize on the insights. As brands become more involved in the process, I expect these agencies to become better stewards of their clients’ budgets.

Are they pushing the output of attribution into media buying systems?

While modeled outputs can be sent to a DSP as inputs for buying orders, the idea of a self-optimizing, closed loop is still a bit futuristic. First, it can only be done for programmatic buying, which is typically just one of many line items on a given media plan (IO-driven media still dominates). Second, it requires close oversight, as numerous factors can produce false positives or negatives in real time. A few examples that can wreak havoc: accidental removal (or duplication) of a conversion tag, a glitch in how a confirmation page is served, a hiccup in the ad server or a disruption in the delivery of log files. These events happen often, so if you’re going to send buying signals to your DSP in real time, you’ll need some guard rails.
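As a minimal sketch of what such a guard rail might look like — the 50% swing threshold and the push_to_dsp call are hypothetical, not a reference to any real API:

```python
def volume_looks_sane(today, trailing_avg, max_swing=0.5):
    # Flag a metric that swings more than 50% against its trailing average --
    # a crude proxy for dropped tags, ad-server hiccups or missing log files.
    return trailing_avg > 0 and abs(today - trailing_avg) / trailing_avg <= max_swing

def safe_to_send(conversions, avg_conversions, impressions, avg_impressions):
    # Only release real-time buying signals when both volumes look normal.
    return (volume_looks_sane(conversions, avg_conversions)
            and volume_looks_sane(impressions, avg_impressions))

# Hypothetical usage: a dropped conversion tag tanks today's count,
# so the signal is suppressed rather than pushed downstream.
if safe_to_send(conversions=12, avg_conversions=90,
                impressions=1_000_000, avg_impressions=1_050_000):
    pass  # push_to_dsp(modeled_kpis)  -- placeholder, not a real API call
```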

In a more practical sense, attribution-based insights are used to compare the accuracy and effectiveness of operational (post-click and post-view) KPIs, which are often relied on for daily buying decisions. In some cases we’ve seen that these KPIs are sufficient for real-time decisioning. But in many cases they can be gamed by vendors (cookie bombers) that are inadvertently rewarded while quality placements are penalized.

The bottom line is the industry is making progress, but we’re still a long way from Nirvana (self-optimizing systems that use modeled attribution KPIs to guide real-time decisions).

What advice would you give marketers? 

There are two important components to success in measuring and optimizing media: the System and the People.

On the People (behavioral) side of the equation:

  1. Get aligned. Many brands still have silos that make cross-channel initiatives challenging. Internal stakeholders need to agree on the end goal (omni-channel proficiency), which requires integrated planning and measurement.
  2. Delegate, but don’t abdicate. If brands choose to delegate measurement to their agency, they need to be active participants in the process. One way is through monthly meetings to review results (beyond the top line). Review the latest results (vendors, strategies, formats), discuss the lessons learned and define changes to make. Trust but validate, and keep your finger on the pulse of the campaign.
  3. Do something! Don’t let the absence of a perfect solution prevent you from moving forward (remember perfect is the enemy of good). Set expectations that will be easy to meet. Each discovery will surface many new questions, as well as insights.
  4. Rationalize Incentives. Unfortunately, advertiser objectives (maximum efficiency) are not always aligned with those of their Agencies and Publishers (maximum spend). Recognizing there is waste in every campaign, incentivize your agency to identify underperforming spend, re-allocate what they can and use the remainder to test and learn. Provide your agency with incentives to optimize efficiency, even if it means spending less in the aggregate (e.g. give them a bonus for saving you money).

On the System (technology and data) side, consider the following:

  1. Focus on your key needs: determine what objectives you’re seeking to achieve, and the questions you’re trying to answer. Then ask vendors how they can help you achieve your specific goals. Ask “how can you help me _____?” rather than “what do you do?”
  2. Leverage your existing infrastructure: If you have an ad server and/or a DMP, you should be able to receive a unified data set (impressions, clicks, visits and conversions per user) for attribution modeling; a minimal sketch of such a data set follows this list. Tagging every ad is no longer viable (too much effort, latency and data loss) or necessary. Rather than re-invent the wheel, seek to use data that already exists.
  3. Focus on ROI. A good attribution platform should yield $20+ in savings and $50+ in revenue for every $1 invested. Put in this light, you can’t afford not to invest in insights that can drive dramatic improvement in efficiency.  And while it used to be that only the largest brands could afford algorithmic attribution, it’s much more affordable today with solutions starting in the low 4 figures per month.
  4. Learn to use it. While attribution has become much more intuitive and user-friendly, advertisers need to invest some time upfront to learn the new KPIs, reconcile them against older metrics, and teach the organization how to use them.
  5. Crawl > Walk > Run. Start with desktop and online conversions, then connect offline conversions. Once you’ve picked the low-hanging fruit, add A/B testing to validate causality. Once you’ve mastered desktop, tackle mobile media (by then there should be more options for obtaining conversion path data). Once you figure out desktop and mobile, add device bridging to get true cross-platform, omni-channel insights. Remember you have to walk before you run. If you set reasonable goals and manage expectations, the probability of success will be significantly higher than if you try to do it all at once.
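Regarding item 2 above, here is a minimal sketch of the kind of unified, user-level record such a data set might normalize into before modeling; the field names are illustrative, not any ad server’s or DMP’s actual schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class MediaEvent:
    user_id: str              # cookie or platform ID, consistent across sources
    timestamp: datetime
    event_type: str           # "impression", "click", "visit" or "conversion"
    channel: str              # e.g. "display", "search", "social"
    vendor: str               # media partner that served or drove the event
    placement: Optional[str] = None
    revenue: float = 0.0      # populated only on conversion events

def to_paths(events):
    # Group events by user and order them in time: the conversion-path
    # input that fractional attribution models consume.
    paths = {}
    for e in sorted(events, key=lambda e: e.timestamp):
        paths.setdefault(e.user_id, []).append(e)
    return paths
```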

I hope you found this informative and thought-provoking. As always your comments and questions are welcome!

Steve Latham | @stevelatham


The Dark Side of Mobile Attribution

August 14th, 2015

Repost of my Data Driven Thinking byline published by AdExchanger August 2015.

The good news: Mobile will be the freight train that drives the media industry.

The bad news: The lack of data availability and transparency will cost marketers billions of dollars.

Since the iPhone’s 2007 introduction, the media industry has deemed every year to be “The year of mobile.” It took longer than expected to mature, but desktop’s awkward little brother is about to dwarf big bro and steal his girlfriend along the way. Mobile surpassed desktop in consumption in 2014 and will surpass it in spending in 2016. eMarketer predicts mobile media will reach $65 billion by 2019, or 72% of digital spending.

As we move towards a “mobile-first” world, we need to address a very big problem: we still can’t accurately measure performance. The ability to target customers in new and innovative ways outpaces the ability to measure the effectiveness of those tactics.

Mobile’s Measurement Problem

The digital media ecosystem was built on cookies to target, track and measure performance. Cookies are imperfect but good enough to develop accurate insights into customers’ journeys. Using cookie data to assemble and model conversion paths, marketers can use fractional or multi-touch attribution to optimize media campaigns much more effectively than with last-click metrics.
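As a minimal sketch of that assembly step, assuming raw log rows of (cookie_id, timestamp, event) tuples already joined across the ad server and site analytics:

```python
from collections import defaultdict

def assemble_paths(log_rows):
    # log_rows: iterable of (cookie_id, timestamp, event) tuples, where event
    # is "impression", "click" or "conversion". Returns time-ordered paths
    # split into converters and non-converters -- models need both groups.
    by_user = defaultdict(list)
    for cookie_id, ts, event in log_rows:
        by_user[cookie_id].append((ts, event))

    converters, non_converters = [], []
    for events in by_user.values():
        events.sort()                        # order each user's touches in time
        path = [event for _, event in events]
        (converters if "conversion" in path else non_converters).append(path)
    return converters, non_converters
```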

In mobile, third-party cookies are blocked on most devices and privacy regulations limit device tracking. Consequently, traditional ad servers are limited to reporting on last-click conversions where possible.

For brands seeking to drive app installs, mobile attribution companies like Kochava, Tune, Appsflyer and Apsalar can track the click that led to the download in Apple or Google stores. Some are working on post-click and post-view reports, but these will be of limited help to advertisers seeking actionable insights.

The lack of mobile data means advertisers cannot quantify reach and frequency across publishers. They also cannot measure performance across publishers via multi-touch attribution. The cost and complexity of device bridging further obfuscates user-level engagement.

Rays Of Light

Mobile data and measurement challenges won’t be solved overnight, but a convergence of factors points to a less opaque future. Here are my predictions:


1. Ad servers will adapt to device IDs

Conceptually, a device ID is not unlike a cookie ID, privacy issues notwithstanding, but it takes time and money to introduce a cookie-less ID system. Following the lead of Medialets, traditional ad servers will introduce their own anonymous IDs, instead of cookies, that map to probabilistic and deterministic device IDs. Like cookies, these IDs will allow them to log user-level data that can feed fractional attribution models. We’ll probably see some early announcements before the end of the year, with more to come in 2016.

2. Data unification will become readily available

To date, demand-side platforms, data management platforms, tag managers and data connectors have fixated on using data to help advertisers target, retarget, cross-sell and remarket. The same data that is used to drive revenue can also be used to connect user-level data for measurement purposes. Companies such as Liveramp, Signal, Exelate and Mediamath are already unifying data for analysis. More will follow.

3. Device bridging will become ubiquitous

To date, connecting devices across publishers has been a luxury afforded only by the largest advertisers. In time that will change, as wireless carriers, and possibly some publishers, offer device graphs independent of media buys, and as standalone vendors, such as Tapad and Crosswise, reach economies of scale. At the same time, ad servers and data connectors will build or license device graphs and offer bridging as an extension of their service.

As ad delivery, data management and device bridging become more integrated (e.g. see announcement by Tapad and Medialets), costs will come down and advertisers of all sizes will be able to measure engagement across devices.

4. Mobile attribution vendors will be forced to evolve

As ad servers and data connectors incorporate device-level conversions in their data sets, including app installs, mobile attribution companies will have to expand their offerings or risk becoming redundant. Some may stick to their knitting and delve deeper into mobile analytics and data management. Others may pivot towards media and expand into desktop or addressable TV. Others may just be acquired. Regardless, it’s unlikely this category will remain as-is for much longer.

5. Last-touch attribution may finally go away

We’ve been predicting the end of the click as a key performance indicator for years. But inertia, apathy and a continuous stream of shiny objects have allowed last-touch metrics to survive while brands and agencies fought other battles.

Now that we’ve tackled video, programmatic, social, native, viewability, fraud and HTML5, the new focus on insights and big data may finally drive the roaches away. The click will be hard to kill, but as we become smarter about measurement, it will become much less visible.

As the mobile data gaps are filled, the promise of cross-platform, cross-device, cross-channel attribution can become a practical reality for advertisers of all sizes.  From a measurement perspective, our best days are still ahead.  But as mentioned in the headline, getting there is going to be quite costly.

Steve Latham

The Problem With Attribution

July 17th, 2015

Repost of my Data Driven Thinking byline published by AdExchanger

In recent months we’ve heard some noise about the problems with using multi-touch attribution to measure and optimize ad spend (see articles in Adexchanger and Digiday).  Some claim attribution is flawed due to the presence of non-viewable ads in user conversion paths. Others say attribution does not prove causality and should therefore be disregarded.

My view is that these naysayers are either painting with too big of a brush or they’re missing the canvas altogether.

Put The Big Brush Away 

The universe of attribution vendors, tools and approaches is large and diverse. You can’t take a broad-brush approach to describing what they do.

If the critics are referring to static attribution models offered by ad servers and site analytics platforms, such as last touch, first touch, U-shaped, time-based and even weighting, I would agree that these are flawed because of the presence of non-viewable ads. Including every impression and click and arbitrarily allocating credit will do more harm than good. But if they’re referring to legitimate, algorithmic attribution solutions, they clearly don’t understand how things work.

First, not all attribution tools include every impression when modeling conversion paths. Where impression-level viewability data is available, non-viewable impressions can be excluded from the data set via outputs from the ad server or a third-party viewability vendor. For the majority of cases where it is not available, there are proven approaches to excluding and/or discounting the vast majority of non-viewable ads. Non-viewable ads and viewable, low-quality ads almost always appear at very high frequency among converters, with 50, 100 or more impressions served to retargeted users. By excluding the frequency outliers from the data set, you eliminate a very high percentage of non-viewable ads. You also exclude most viewable ads of suspect quality.
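A minimal sketch of that exclusion step, assuming each path is a list of event labels; the cap of 50 impressions per user is illustrative, not an industry standard:

```python
def exclude_frequency_outliers(paths, max_frequency=50):
    # Drop conversion paths whose impression count exceeds the cap. Heavily
    # retargeted users (50, 100+ impressions) carry a disproportionate share
    # of non-viewable and low-quality inventory, so capping frequency strips
    # much of that noise before any model is fit.
    return [
        path for path in paths
        if sum(1 for event in path if event == "impression") <= max_frequency
    ]
```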

Second, unlike static models, machine-learning models are designed to reward ads that contribute and discount ads that are in the path but are not influencing outcomes. Because cookie bombing is not very efficient, producing lots of wasted impressions of questionable value, such ads are typically devalued by good algorithmic attribution models.

By excluding frequency outliers and using machine-learning models to allocate fractional credit, attribution can separate much of the signal from the noise, even the noise you can’t see. And while algorithmic attribution does not necessarily prove causality, a causal inference can be achieved by adding a control group. While not perfect, it’s more than sufficient for helping advertisers optimize spend.
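As a minimal sketch of the control-group comparison mentioned above (the numbers are invented, and the two-proportion z-test is one common choice, not necessarily what any given vendor uses):

```python
import math

def lift_with_significance(conv_exposed, n_exposed, conv_control, n_control):
    # Compare conversion rates of exposed vs. control (e.g. PSA) users and
    # return the incremental lift plus a z-score for the difference.
    p_e = conv_exposed / n_exposed
    p_c = conv_control / n_control
    lift = (p_e - p_c) / p_c
    p_pool = (conv_exposed + conv_control) / (n_exposed + n_control)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_exposed + 1 / n_control))
    return lift, (p_e - p_c) / se

# Illustrative: 1.2% exposed vs. 1.0% control conversion rate.
lift, z = lift_with_significance(1200, 100_000, 1000, 100_000)
# lift = 0.20 (20% incremental), z ~ 4.3 -- unlikely to be chance.
```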

You Missed The Entire Canvas

Complaining that attribution models are not accurate enough is like chiding Monet for being less precise than Picasso, especially when many advertisers are still painting with their fingers.

It’s easy to split hairs and poke holes in attribution, viewability, brand safety, fraud prevention, device bridging, data unification and other essential ad-tech solutions. But the absence of a bulletproof solution is not a valid reason to continue relying on last century’s metrics, such as click-through rates and converting clicks.

As Voltaire, Confucius and Aristotle said in their own ways, “Perfect is the enemy of good.”
Ironically, so is click-based attribution.

While no one claims to have all the answers with 100% accuracy, fractional attribution modeling can improve media performance over last-click and static models. And while not every advertiser can be the next Van Gogh, they can use the tools and data that exist today to get a solid “A” in art class.

The Picture We Should Be Painting

I’m a big fan of viewability tools and causality studies, and I’m an advocate for incorporating both into attribution models. I am not a fan of throwing stones based on inaccurate or theoretical arguments.

Every campaign should use tools to identify fraud, non-viewable ads and suspect placements. The outputs from these tools should be inputs to attribution models, and every advertiser should carve out a small budget for testing. While this is an idealistic picture, it may not be too far away. As the industry matures, capabilities become integrated and advertisers, including agencies and brands, learn to use the tools, we will get closer to marketing Nirvana.

In the meantime, advertisers should continue to make gradual improvements in how they serve, measure and optimize media. Even if it’s not perfect, every step counts.

Ad-tech companies should remember we’re all part of an interdependent ecosystem. We need to work together to help advertisers get more from their media budgets. And we all need to have realistic expectations. From a measurement perspective, the industry will always be in catch-up mode, trying to validate the shiny new objects being created by media companies.

All that said, we can do much more today than only one year ago. We’ll continue to make progress. Advertisers will be more successful. And that will be good for everyone.

Steve Latham

Shedding Light Beneath the Attribution Canopy

May 22nd, 2015

AdExchanger recently published a timely article, “Breaking Through the Attribution Canopy,” on the Attribution marketplace (view it on Encore’s facebook page). Overall they did a good job of highlighting the conflicts of interest that are inherent when your media vendor is also your trusted source of insights. They also touched on the emergence of new solutions that are designed to address the needs of the larger market. Along with other industry executives, I was quoted in the interview.

During the interview, we discussed a lot of issues surrounding media attribution and optimization.  But as with any interview, only a few of my comments were published.  To provide some context and clarify our POV, here are the key takeaways:

  • We are glad to see that Attribution has (finally) reached a tipping point. Brands, agencies, DSPs and media platforms are scrambling to leverage machine-based insights to optimize media spend. Continuing to rely on last-touch KPIs is simply a lazy and irresponsible approach to measuring media.
  • We believe measurement, analysis and optimization decisions should be driven by the advertiser, its agency or an independent solution provider, not its media vendor.  Even if the fox is friendly, it shouldn’t be in the hen house.
  • We also believe data should be easily ported, integrated and made available for analysis, regardless of who sells the media or who serves the ads.  Openness, transparency and portability are not only ideological values; they also make business sense.
  • The growing concentration of power of leading media and technology vendors should be on everyone’s radar as a threat to transparency and openness.  If you look at the markets for programmatic display, video advertising, search, social marketing, mobile advertising* and ad serving, the dominant players are making it difficult and expensive to independently analyze their data in the context of other media. The path to marketing and advertising success does not end in a walled garden.
  • To date, advanced insights (e.g. algorithmic attribution and data-driven optimization tools) have been reserved for the largest advertisers who can afford six-figure price tags. As the article points out, there is a large unmet need beyond the top 200 advertisers. To address the needs of the thousands of middle-market advertisers, a new model (no pun intended) is needed. Heavy, expensive and service-intensive solutions cannot scale across the broader market. The next phase of adoption will be won by light and agile solutions that are affordable and easy to implement.
  • To deliver modeled insights at scale, the solution must be automated, efficient, flexible and customizable for each advertiser. It should also be affordable. On this point, we wholeheartedly agree with Forrester’s Tina Moffett: “I think one advantage [attribution start-ups] do have is they were able to see the market needs and where the gaps were … and where existing players were falling short.”

For these reasons, we are very excited about the prospects for innovators who are able to address unmet needs for the large and growing middle market.

*For more on my quote that Google gets half of all mobile ad dollars, please see the eMarketer report published earlier this year.

As always, thanks for reading and feel free to share comments or contact me if you have any questions.

Steve Latham

The Value of Data: Our POV on Verizon-AOL

May 19th, 2015

I was recently interviewed by Advertising Age on the data angle of the Verizon-AOL deal (read the AdAge article).  While still fresh in my mind, I thought I’d share our POV.

First, Verizon already has unprecedented insight into what people are doing:

  • They know which device is speaking to their network and see the packets and requests sent to each device (i.e. they capture all data sent and received)
  • They record every user session (e.g. using an app, typing an email or browsing the web)
  • Whether you’re using Verizon’s wireless service or local wifi to access the Internet, all data is captured
  • They track online behavior via cookies and relationships with 3rd parties (e.g. the AOL-owned Huffington Post)
  • They connect devices to each other and to desktops and households better than anyone else.
  • We believe Verizon already collects more data than any other provider

Acquiring AOL gives Verizon the ability to analyze the data and use it for advanced targeting of digital media. While AOL’s sites (e.g. HuffPo) have some value, the real value is the ability to sell advanced audience targeting through AOL’s programmatic buying platforms for advertisers and publishers. In short, Verizon has the diamond mines; AOL provides the mining equipment, sales and distribution.

And Verizon’s diamonds will have superior cut and clarity to anything today’s competitors can offer, as it can provide deeper insight into customer behavior across platforms and devices. While competitors are trying to stitch together the pieces from the outside in, Verizon has already bolted them together from the inside out.

Not to say that AOL’s data isn’t valuable too.  In recent years AOL has done a great job of developing a very large proprietary data platform.  The Verizon deal will enhance AOL’s data in numerous ways:

  • Expand the reach of users
  • Expand the data on each user: demo, geo, behavioral, etc.
  • Enable better multi-platform / device bridging
  • Improve resolution and accuracy

So at the end of the day, this deal is about mining all that data and converting it into revenue.  As noted in the AdAge article, they will need to do this very carefully and responsibly. Verizon has a spotty record among privacy advocates so it would be smart to proceed with caution.

Thanks for your time and interest.  I look forward to your comments.

Steve Latham


Investing Confidently (and Safely) in Programmatic

March 28th, 2015

Over the past few years, we’ve spent a lot of time advising Brands and Agencies on the challenges and risks associated with Programmatic buying (which for this post will encompass exchange-traded media, RTB, etc.). While the idea of machine-based buying is exciting, it’s not without significant challenges and risks. Having analyzed dozens of programmatic campaigns, we’ve found that a blind leap into Programmatic is almost always a costly endeavor. The thesis for taking a smart approach to programmatic buying is summarized below:

  • While the promise of self-optimizing buying is intriguing, it doesn’t replace the need for objective, rational analysis.
  • Programmatic optimization is typically based on a broken model.  The continued reliance on clicks, post-click and post-view metrics may do more harm than good.
  • Algorithmic attribution is critical for measuring and optimizing media.  Fractional, statistical analysis is needed for accurate and impactful cross-channel, full-funnel insights.
  • As brands shift more of their budgets to programmatic, the need for objective, attribution-based insights will become even more critical.

I recently documented some of the key lessons learned in the embedded presentation, “Investing Confidently in Programmatic.” I thought about calling it “How to Avoid Wasting Half of Your Media Budget” but opted for the more positive spin. Either would be sufficiently accurate.

In it, I address some of the risks and challenges of Programmatic buying, along with recommendations for ensuring a successful investment in this rapidly changing arena.  Also included is a SWOT analysis to frame the strengths, weaknesses, opportunities and threats that advertisers must deal with to be successful in this new area of machine-based buying.

As always, your comments and questions are welcome – just post!  If you’d like a copy of this presentation please contact us.

Steve Latham


The Growing Need for Device Bridging

December 1st, 2014


As an industry, we are quickly moving to a “mobile first” world where mobile engagement is becoming an increasingly important part of the customer journey in most considered purchases. From a targeting standpoint, digital publishers have done a decent job of assembling the components to engage individuals across desktops, laptops, mobile phones and tablets. But on the measurement side of the spectrum, marketers are way behind the curve.

Defining the Problem
Traditional platforms and measurement tools use cookies or alternative IDs to track the behavior of each browser as an individual user. As consumers increasingly use multiple devices as part of their everyday lives, single-screen tracking provides a very limited view of user-level engagement. Even within a single mobile device, it’s difficult to connect the mobile browser and applications, meaning most publishers and advertisers won’t know you’re the same person who saw an in-app ad before navigating to their site through a search on your mobile browser. When you factor in the use of tablets, desktops and laptops, the challenge becomes even more complex. Let’s use the following example to illustrate the challenge:

Suppose you see an ad for TravelX (fictitious travel site) on your iPhone’s mobile browser and you remember you need to book a plane ticket home for the holidays. So you go to the App store and download the TravelX app and book a flight home. Unfortunately, TravelX will not know that the mobile ad you saw on Safari drove the purchase you just made on their mobile App. To them, you appear to be two different users.

In your confirmation email is an ad for a great rate on a rental car. You see it on your phone and make a mental note to do some research tomorrow. While at work the following day, you visit the TravelX site on your laptop to check out rates. You’re not ready to buy but you’ve definitely shown interest and are deep in the funnel. But unless you sign in using the same ID as on the mobile app, TravelX will not know you are the same person who just booked a flight through their app. Again, they will classify you as a unique user.

To sum up, you responded to TravelX’s ad, downloaded their app, booked a flight, and are now considering a rental car.  While Google, Facebook and others may help them retarget you on your laptop and mobile device, TravelX won’t know how to accurately attribute credit for the conversions.  Given the gaps in “traditional” digital measurement, TravelX doesn’t know its integrated media plan is working so well.  They are still wondering what caused you to download their app in the first place while questioning the value of the seemingly ineffective mobile browser ad that got your attention, but not your click.


This example illustrates the challenges marketers face in creating a unified view of each customer (even on the sites you frequent).  If you’re like the majority of advertisers who do not require users to authenticate, the challenge of measuring the source of new conversions is even harder.

Beyond the bridging problem, we also see a lot of advertisers who rely on their publishers to serve mobile ads and report the results of the campaign.  As with site-served desktop ads, this leaves a lot of unanswered questions about the true reach, frequency and timing of ads being served and/or rendered.

Solving the Device Bridging Problem

In recent years some innovative companies have emerged to address the gaps in device bridging and data unification. A few of them are even independent of your media buy, which is very important (let’s keep the foxes out of the hen house). While no single solution offers a silver bullet for this problem, point solutions are available to:

  • Serve ads to mobile browsers, using a device ID vs. cookies to track user engagement
  • Bridge mobile browsers to in-app engagement
  • Bridge mobile devices to other devices, desktops and laptops used by the same individual
  • Connect device IDs to a universal user ID that can be used for online and offline CRM efforts

By bridging device-specific data at the user level, advertisers can connect all impressions, visits and conversions associated with each customer journey, regardless of platform or device. For those brands going head-first into mobile, this will be required to truly understand reach, frequency and cross-platform synergies for each campaign.
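A minimal sketch of the bridging step itself, assuming a vendor-supplied list of matched ID pairs (e.g. cookie to mobile device ID); union-find is one simple way to collapse those pairs into per-person groups, not a description of any particular vendor’s method:

```python
def bridge_devices(id_pairs):
    # id_pairs: iterable of (id_a, id_b) matches from a device graph.
    # Returns a mapping from each raw ID to a unified "person" ID, so events
    # keyed by cookie, IDFA, etc. can be merged into one journey.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:            # path-halving union-find
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in id_pairs:
        parent[find(a)] = find(b)        # union the two ID clusters

    return {node: find(node) for node in parent}

# Hypothetical matches: Safari cookie, in-app device ID, work-laptop cookie.
unified = bridge_devices([("cookie123", "idfa456"), ("idfa456", "cookie789")])
# All three IDs now map to the same person ID.
```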

Armed with these insights, marketers can plan more effectively, reduce waste and optimize spend while better understanding how and when consumers engage with their brand.  Without these insights, there will be some lingering questions about the relative contribution, impact and ROI from each digital media buy.

(Images courtesy of Tapad)

As always, comments are welcome!

Steve Latham


Encore Launches New Attribution Platform

November 13th, 2014

Encore’s Customer-Driven Solution Makes Algorithmic Optimization Easy and Affordable 

New York City — Encore Media Metrics, a pioneer in digital attribution, today announced the launch of its new cloud-based analytics platform. With the introduction, Encore is making advanced measurement and impactful optimization easy, efficient and affordable for brands and agencies of all sizes.

According to Tina Moffett of Forrester, “Marketers need guidance on development, management and analysis of complex marketing efforts. Attribution vendors need to deliver what customers are looking for: turning insights into action, calculating accurate performance metrics, providing a holistic view of the customer purchase path, and taming the messiness of big data.”

Leveraging six years of research and innovation, Encore designed its platform to address these unmet customer needs, delivering clear and compelling insights that are readily available and easy to act on – without data overload. Beyond automated ingestion, algorithmic modeling and visualization of paid, earned and owned media, Encore’s new platform provides powerful insights, and prescriptive recommendations for reducing waste and optimizing spend. Encore’s platform enables customers to make real-time updates and modifications directly in the web-based User Interface, providing unique personalization and efficiency. Best of all, Encore’s solution is easy to implement, easy to use and priced to fit budgets of all sizes.

“Encore’s new platform delivers the robust analysis, insights and recommendations marketers need to measure and optimize campaigns with confidence,” said Gunnard Johnson, SVP Analytics, Centro. “The new Interface delivers the insights that matter without overwhelming us with data. They’ve done a great job of delivering statistically modeled outputs while giving us control and flexibility in how the insights are presented. I believe it’s a smart and affordable solution that fills the gap in today’s marketplace.”

“We wanted to make advanced analytics easy and efficient for brands, agencies and partners,” said Steve Latham, founder and CEO of Encore. “Therefore, we had to develop a scalable, machine-based solution that would provide clear and compelling insights, along with actionable recommendations for optimizing spend. We also had to simplify the user experience by reducing the complexity and level of effort typically required of a customer. Lastly, we had to make it affordable for the majority of the market.”

Encore’s new cloud-based platform aggregates and analyzes conversion path data using proven models for attributing fractional credit to each interaction along the way. Programmatic reports, insights and recommendations for optimizing spend are presented in a customizable User Interface that allows each customer to personalize how insights are presented and acted on. A predictive analysis module forecasts future results and the expected lift in performance from optimizing on Encore’s insights.

“Encore’s platform delivers real savings and improved returns for our clients,” said Kunick Kapadia, Channel Analytics Supervisor, The Gate Worldwide. “Analyzing the true impact of each media channel is an enormous undertaking, but Encore’s real-time dashboard unlocks these data points and turns them into actionable insights. They have more than paid for themselves through the savings and enhanced ROI as a result of our partnership. I would recommend Encore as the go-to solution for clients of all sizes.”

Learn more about Encore’s new Attribution Platform.

Observations on the Attribution Market

July 7th, 2014

The market for Attribution companies has definitely heated up with high profile acquisitions by Google and AOL.  I view these transactions as strong proof points that brands and their agencies are starving for advanced data-driven insights to optimize their investments in digital media.  The same thesis that led us to start Encore several years ago still holds true today: traditional metrics are no longer sufficient and advanced insights are needed to truly understand what works, what doesn’t, and how to improve ROI from marketing dollars.

Over the years we’ve analyzed more than 100 brand and agency campaigns of all sizes – from the largest CPG companies in the world, to emerging challengers in Retail, Automotive, Travel and B2B.  Based on these experiences, here are 5 observations that I’ll share today:

  1. We are still early in the adoption curve.   While many brands and agencies have invested in pilots and proofs of concept, enterprise-wide (or agency-wide) adoption of fractional attribution metrics is still relatively low, and the big growth curve is still ahead of us.  About 18 months ago I wrote about 2013 being the year Attribution Crosses the Chasm.  I now see I was a bit early in my prediction – 2014 is clearly the year Attribution grows up.
  2. There is still some confusion about who should “own” cross-channel / full-funnel attribution.  Historically brands have delegated media measurement to their agencies.  We now see brands taking on a more active role in deciding how big data is used to analyze and inform media buys.  And as the silos are falling, the measurement needs of the advertiser often transcend the purview of their media agency.  In my opinion, responsibility for measurement of Paid, Owned and Earned media will increasingly shift from the agencies to the brands they serve.  This is already the case for many CPG companies we serve.  In measuring media for more than a dozen big consumer brands, we’re seeing the in-house teams setting direction and strategy, while agencies play a supporting role in the measurement and optimization process.  We’re happy to work with either side; they just need to decide who owns the responsibility for insights.
  3. Multi-platform measurement is coming, but not as fast as you might think.  We are big believers in the need for device bridging and multi-platform measurement and are working with great companies like Tapad to address the unmet need of unifying data to provide a more comprehensive view of customer engagement.  To date we’ve presented Device Bridging POVs to most of our customers.  And while many are interested in this subject, very few will invest this year.  It’s not that the demand isn’t there – it will just take some time to mature.
  4. Marketers need objective and independent insights – now more than ever.  Despite increasing efforts by big media companies to bundle analytics with their media, the days of relying on a media vendor to tell you how well its ads performed are numbered.  It’s fine to get their take on how they contributed to your business goals, but agencies and brands need objective 3rd-party insights to validate the true impact of each media buy.  And with the growing reliance on exchange-traded media and machine-based decisioning, objective, expert analysis is needed more than ever to de-risk spend and improve ROI.   We’ve found this approach works well – especially in days like these where it’s all about sales.  This leads to my 5th observation…
  5. In the end it’s about Sales.  While digital KPIs are great for measuring online engagement, we’re seeing more and more interest in connecting digital engagement to offline sales.  Again, we’re fortunate to work with great partners like (m)PHASIZE to connect the dots and show the true impact of digital spend on offline sales.  We’re also working on opportunities with LiveRamp and Mastercard to achieve similar goals.  Like device bridging, I see this becoming more of a must-have in 2015, but it’s good to have the conversations today.

There is so much more to discuss and I’m sure our market will continue to iterate and evolve quickly.  But to sum it up, it’s an exciting time to be in the digital media measurement space. Attribution is finally coming of age and it’s going to be a hell of a ride for the next few years.

As always, comments are welcome!

Steve Latham




Inefficiencies of Exchange Traded Media

January 21st, 2014

Encore’s latest POV on Inefficiencies of Exchange Traded Media was published by AdExchanger on January 21, 2014.  You can read the article on AdExchanger or read the POV below.

While exchange-traded media is praised for bringing efficiency to the display ad market, a deeper dive reveals numerous factors that are costing advertisers billions of dollars in wasted spend.  While programmatic buying is relatively efficient (compared to other media), on an absolute basis a lot of wasted spend generally goes unnoticed.

Research shows that perverse incentives, a lack of controls and limited use of advanced analytical tools have made a majority of exchange-traded media worthless.  Even as we advance how we buy and sell media, there is still significant room for improvement in the quality and economic returns of real-time bidding (RTB).


Where Waste Occurs

Optimizing media starts with eliminating wasted spending. In the RTB world, waste can take many forms:

  • Fraud: Either 1x1s sold into exchanges to generate revenue, or impressions served to bots and other non-human traffic.
  • Non-viewable ads: These are legitimate ads that are not viewable by the user.
  • Low-quality inventory: Ads served on pages whose primary purpose is to house ads – six, eight or more than 10 per page.
  • Insufficient frequency: Too few ads served per user – one or two – to create desired awareness.
  • Excessive frequency: Too many ads served to individual users – 100, 500 or more RTB impressions over 30 days.
  • Redundant reach: Multiple vendors target the same users. This is often a consequence of vendors using the same retargeting or behavioral tactics to reach the same audiences.


Quantifying The Costs

The percentage of wasted impressions varies by campaign, but it’s usually quite significant. Here are some general ranges of wasted RTB impressions:

  • +/- 20% of exchange-traded inventory is deemed fraudulent, according to the Integral Ad Science Semi-Annual Review 2013.
  • +/- 7% of viewable inventory is served on ad farm pages (more than six ads)
  • +/- 60% of legitimate inventory is not viewable per IAB standard
  • 10 to 40% of impressions are served to users with frequency that is too low to influence their behavior
  • 5 to 30% of impressions are served to users with frequency greater than 100 over the previous 30 days (the more vendors, the higher the waste due to redundant reach and excessive retargeting)

To put this in the context of a typical campaign, assume 100 million RTB Impressions are served in a given month.

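Using illustrative mid-range rates drawn from the bullets above (these are assumed percentages for a hypothetical campaign, not measured figures), the funnel math looks roughly like this:

```python
impressions = 100_000_000                      # monthly RTB impressions

legitimate = impressions * (1 - 0.20)          # ~20% fraudulent / non-human
viewable   = legitimate * (1 - 0.60)           # ~60% of legitimate not viewable
quality    = viewable * (1 - 0.07)             # ~7% served on ad-farm pages
# Assume mid-range frequency waste: 25% under-exposed, 15% over-exposed.
effective  = quality * (1 - 0.25) * (1 - 0.15)

print(f"{effective / impressions:.0%} of impressions remain")  # ~19%
```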


In most cases, less than 20% of RTB impressions are viewable by humans on legitimate sites with appropriate frequency. In other words, 20% of all impressions drive 99% of the results from programmatic buying.  Because RTB impressions are so inexpensive, it’s still a very cost-effective channel.  That said, there is considerable room for improvement within RTB buying.

Who’s To Blame?

When we present these analyses to clients, the first question often asked is, “Who’s to blame?” Unfortunately, there is no single culprit behind the RTB inventory problem. As mentioned, the problem is due largely to a lack of controls and perverse incentives.

  • Lack of Controls: While a growing number of brands and agencies are incorporating viewability and using algorithmic analytical tools, most are still in the dark ages. Some feel their results are “good enough” and choose not to dig deeper. Others seem not to care. Hopefully this will change.
  • Perverse incentives: We live in a CPM world where everyone in the RTB value chain – save the advertiser – profits from wasted spending. It’s not just the DSPs, exchanges and ad networks that benefit; traditional publishers now extend their inventory through RTB and unknowingly contribute to the problems mentioned above. While steps are being taken to address these issues, we’re not going to see dramatic improvement until the status quo is challenged.


How To Fix The Problem

The good news is that the RTB inventory problems are solvable. Some tactical fixes include:

  • We should invest in viewability, fraud detection and prevention, and algorithmic attribution solutions. While not expensive, they do require a modest investment of time, energy and budget. But when you consider the cost of doing nothing – and wasting 50 to 80% of spending – the business case for investing is very compelling.
  • We need to stop using multiple trading desks and RTB ad networks on a single campaign, or they’ll end up competing against each other for the same impressions. This will reduce the redundant reach and excessive frequency while keeping a lid on CPMs. It will also make it easier to pinpoint problems when they occur.
  • Finally, we need to analyze the frequency distribution each month. Average frequency is a bad metric, as it can mask a lot of waste. If 100 users are served one ad each and one user is served 500 ads, the average frequency is about six, yet virtually every impression is wasted: 100 users saw too few ads to matter and one user saw far too many. Look at the distribution of ads by frequency tier to see where waste is occurring (a quick sketch of that check follows this list).
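A minimal sketch of that distribution check, using the example’s numbers; the tier boundaries are illustrative, not a standard:

```python
from collections import Counter

# Per-user impression counts from the example above:
# 100 users saw one ad each, one user saw 500.
freq_per_user = [1] * 100 + [500]

avg = sum(freq_per_user) / len(freq_per_user)
print(f"average frequency: {avg:.1f}")         # ~5.9 -- looks healthy, isn't

tiers = Counter()
for f in freq_per_user:
    if f <= 2:
        tiers["1-2 (too low)"] += f
    elif f <= 20:
        tiers["3-20 (effective)"] += f
    else:
        tiers[">20 (excessive)"] += f

total = sum(freq_per_user)
for tier, imps in tiers.items():
    print(f"{tier}: {imps / total:.0%} of impressions")
# 1-2: ~17% of impressions; >20: ~83% -- nearly everything falls outside
# the effective range even though the average looks fine.
```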

For strategic change to occur, brands and their agencies must lead the way. In this case, “leading” means implementing controls and making their vendors accountable for quality and performance of display media.

  • Brands must demand more accountability from their agencies. They also need to equip them with the tools and solutions to address the underlying problems.
  • Agencies must demand better controls and make-goods from media vendors. Until we have better controls for preventing fraud and improving the quality of reach and frequency, media vendors need to stand behind their product, enforce frequency caps and make internal investments to improve the quality and efficiency of their inventory.
  • All buyers must make their DSPs and exchanges accountable for implementing more comprehensive solutions to address the fraud and frequency problems.


The Opportunity

We can’t expect a utopian world where no ads are wasted, but we can and should make dramatic improvements. By reducing waste, advertisers will see even greater returns from display media. Higher returns translate into larger media budget allocations, and that will benefit us all.

While fixing the problems may dampen near-term RTB growth prospects, it will serve everyone in the long run. Removing waste and improving quality of media will help avoid a bubble while contributing to the sustainable growth of the digital media industry.  Given the growing momentum in the public and private equity markets, I hope we as an industry take action sooner rather than later.

As always, comments are welcome.

Steve Latham
