Author Archive

The Growing Need for Device Bridging

October 28th, 2014

As an industry, we are quickly moving to a “mobile first” world where mobile engagement is becoming an increasingly important part of the customer journey in most considered purchases.  From a targeting standpoint, digital publishers have done a decent job of assembling the components to engage individuals across desktops, laptops, mobile phones and tablets.  But on the measurement side of the spectrum, as an industry we are behind the curve.

Digital tracking platforms use cookies or alternative IDs to track each browser as an individual user.  As consumers are increasingly using multiple devices as part of their everyday lives, single-screen tracking provides a limited view of user-level interaction.  Even on your mobile device, the lack of unification between your mobile browser and in-app engagement makes it difficult for publishers and advertisers to know you’re the same person.  When you factor in the use of tablets, desktops and laptops, the challenge becomes even more complex.  Let’s use the following example to illustrate the challenge:

Suppose you see a Brand X Travel ad on your iPhone’s mobile browser and you remember you need to book a plane ticket home.  You then go to the App Store and download the new Brand X Travel app and book a flight.  Unfortunately, the Brand X marketers will not know that the mobile ad you saw on Safari drove the purchase you just made in their mobile app.  To them, you appear to be two different users.

In your confirmation email is an ad for a great rate on a rental car.  You see it on your phone and make a mental note to do some research tomorrow.  The next day you visit the Brand X travel site on your desktop and check out rates. Unless you sign in using the same ID as your mobile app, Brand X will not know you are the same person who just booked a flight.  Again, they will classify you as a unique user.

This example illustrates the challenges in creating a unified view of each customer, even for the sites you do business with.  If you’re like the majority of advertisers who do not have user log-ins, the challenge is even harder.  In the example above, you appear to be three unique users, and the advertiser cannot tell that the mobile ad drove a plane ticket and consideration for a rental car.  Without taking steps to bridge devices and unify user IDs, it’s hard to accurately measure the true reach, frequency and digital interactions in the conversion path.  And this makes it hard to fully optimize your digital media strategy.


The good news is that solutions exist to address this problem.  In recent years many innovative companies have emerged to address the gaps in device bridging and data unification.  While no one solution offers the silver bullet to address this problem, point solutions are available to:

  • Bridge mobile browsers to in-app engagement
  • Bridge mobile devices to desktops and laptops
  • Connect all device IDs to one universal ID that can be used for online and offline CRM efforts

By bridging device-specific data at the unique user level, we can connect all impressions, visits and conversions associated with each customer journey, regardless of platform or device.  We can then see the true reach, frequency and cross-platform synergies for each campaign.
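The bridging step above can be sketched as a toy identity graph (purely illustrative, not any vendor’s actual method): pairwise device matches are merged into clusters, and each cluster becomes one unified user ID.

```python
class IdentityGraph:
    """Toy identity graph: merges matched device IDs into unified users (union-find)."""

    def __init__(self):
        self.parent = {}

    def _find(self, device_id):
        self.parent.setdefault(device_id, device_id)
        while self.parent[device_id] != device_id:
            # Path halving keeps lookups fast as the graph grows.
            self.parent[device_id] = self.parent[self.parent[device_id]]
            device_id = self.parent[device_id]
        return device_id

    def match(self, a, b):
        """Record that two device IDs belong to the same person."""
        self.parent[self._find(a)] = self._find(b)

    def unified_id(self, device_id):
        return self._find(device_id)

# The three "users" from the Brand X example collapse into one once bridged.
graph = IdentityGraph()
graph.match("iphone-safari-cookie", "iphone-app-IDFA")  # mobile browser <-> in-app
graph.match("iphone-app-IDFA", "desktop-cookie")        # mobile <-> desktop

devices = ["iphone-safari-cookie", "iphone-app-IDFA", "desktop-cookie"]
unique_users = {graph.unified_id(d) for d in devices}
print(len(unique_users))  # 1 unified user instead of 3 "uniques"
```

With the device IDs collapsed, impressions, visits and conversions can be keyed to the unified ID rather than the cookie, which is what makes true reach and frequency measurable.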

Armed with these insights, marketers can plan more effectively, reduce waste and improve digital ROI while better understanding how and when consumers engage with their brand.

(Images courtesy of Tapad)

As always, comments are welcome!

Steve Latham
@stevelatham



Observations on the Attribution Market

July 7th, 2014


The market for Attribution companies has definitely heated up with high profile acquisitions by Google and AOL.  I view these transactions as strong proof points that brands and their agencies are starving for advanced data-driven insights to optimize their investments in digital media.  The same thesis that led us to start Encore several years ago still holds true today: traditional metrics are no longer sufficient and advanced insights are needed to truly understand what works, what doesn’t, and how to improve ROI from marketing dollars.

Over the years we’ve analyzed more than 100 brand and agency campaigns of all sizes – from the largest CPG companies in the world, to emerging challengers in Retail, Automotive, Travel and B2B.  Based on these experiences, here are 5 observations that I’ll share today:

  1. We are still early in the adoption curve.   While many brands and agencies have invested in pilots and proofs of concept, enterprise-wide (or agency-wide) adoption of fractional attribution metrics is still relatively low, and the big growth curve is still ahead of us.  About 18 months ago I wrote about 2013 being the year Attribution Crosses the Chasm.  I now see I was a bit early in my prediction – 2014 is clearly the year Attribution grows up.
  2. There is still some confusion about who should “own” cross-channel / full-funnel attribution.  Historically brands have delegated media measurement to their agencies.  We now see brands taking on a more active role in deciding how big data is used to analyze and inform media buys.  And as the silos are falling, the measurement needs of the advertiser often transcend the purview of their media agency.  In my opinion, responsibility for measurement of Paid, Owned and Earned media will increasingly shift from the agencies to the brands they serve.  This is already the case for many CPG companies we serve.  In measuring media for more than a dozen big consumer brands, we’re seeing the in-house teams setting direction and strategy, while agencies play a supporting role in the measurement and optimization process.  We’re happy to work with either side; they just need to decide who owns the responsibility for insights.
  3. Multi-platform measurement is coming, but not as fast as you might think.  We are big believers in the need for device bridging and multi-platform measurement and are working with great companies like Tapad to address the unmet need of unifying data to have a more comprehensive view of customer engagement.  To date we’ve presented Device Bridging POVs to most of our customers.  And while most are interested in this subject, very few will invest this year.  It’s not that the demand isn’t there – it will just take some time to mature.
  4. Marketers need objective and independent insights – now more than ever.  Despite increasing efforts by big media companies to bundle analytics with their media, the days of relying on a media vendor to tell you how well their ads performed are limited.  It’s fine to get their take on how they contributed to your business goals, but agencies and brands need objective 3rd-party insights to validate the true impact of each media buy.  And with the growing reliance on exchange-traded media and machine-based decisioning, objective, expert analysis is needed more than ever to de-risk spend and improve ROI.  We’ve found this approach works well – especially in days like these where it’s all about sales.  This leads to my 5th observation…
  5. In the end it’s about Sales.  While digital KPIs are great for measuring online engagement, we’re seeing more and more interest in connecting digital engagement to offline sales.  Again, we’re fortunate to work with great partners like (m)PHASIZE to connect the dots and show the true impact of digital spend on offline sales.  We’re also working on opportunities with LiveRamp and Mastercard to achieve similar goals.  Like device bridging, I see this becoming more of a must-have in 2015, but it’s good to have the conversations today.

There is so much more to discuss and I’m sure our market will continue to iterate and evolve quickly.  But to sum it up, it’s an exciting time to be in the digital media measurement space. Attribution is finally coming of age and it’s going to be a hell of a ride for the next few years.

As always, comments are welcome!

Steve Latham
@stevelatham



Inefficiencies of Exchange Traded Media

January 21st, 2014

Encore’s latest POV on Inefficiencies of Exchange Traded Media was published by AdExchanger on January 21, 2014.  You can read the article on AdExchanger or read the POV below.

While exchange-traded media is praised for bringing efficiency to the display ad market, a deeper dive reveals numerous factors that are costing advertisers billions of dollars in wasted spend.  While programmatic buying is relatively efficient (compared to other media), on an absolute basis a lot of wasted spend generally goes unnoticed.

Research shows that perverse incentives, a lack of controls and limited use of advanced analytical tools have rendered the majority of exchange-traded media worthless.  Even as we advance how we buy and sell media, there is still significant room for improvement in the quality and economic returns of real-time bidding (RTB).

 

Where Waste Occurs

Optimizing media starts with eliminating wasted spending. In the RTB world, waste can take many forms:

  • Fraud: 1x1s sold into exchanges solely to generate revenue, or impressions served to bots and other non-human traffic.
  • Non-viewable ads: These are legitimate ads that are not viewable by the user.
  • Low-quality inventory: Ads served on pages whose primary purpose is to house six, eight or more than 10 ads.
  • Insufficient frequency: Too few ads served per user – one or two – to create the desired awareness.
  • Excessive frequency: Too many ads served to individual users – 100, 500 or more RTB impressions over 30 days.
  • Redundant reach: Multiple vendors target the same users. This is often a consequence of vendors using the same retargeting or behavioral tactics to reach the same audiences.

 

Quantifying The Costs

The percentage of wasted impressions varies by campaign, but it’s usually quite significant. Here are some general ranges of wasted RTB impressions:

  • +/- 20% of exchange-traded inventory is deemed fraudulent, according to the Integral Ad Science Semi-Annual Review 2013.
  • +/- 7% of viewable inventory is served on ad farm pages (more than six ads)
  • +/- 60% of legitimate inventory is not viewable per IAB standard
  • 10 to 40% of impressions are served to users with frequency too low to influence their behavior
  • 5 to 30% of impressions are served to users with frequency greater than 100 over the previous 30 days (the more vendors, the higher the waste due to redundant reach and excessive retargeting)

To put this in the context of a typical campaign, assume 100 million RTB Impressions are served in a given month.

(RTB waste infographic)

 

In most cases, less than 20% of RTB impressions are viewable by humans on legitimate sites with appropriate frequency. In other words, roughly 20% of all impressions drive 99% of the results from programmatic buying.  Because RTB impressions are so inexpensive, it’s still a very cost-effective channel.  That said, there is considerable room for improvement within RTB buying.
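Working the waterfall with midpoints of the ranges listed earlier (the midpoints are my own illustrative assumptions, not measured figures) shows how 100 million impressions shrink to under 20% effective:

```python
impressions = 100_000_000

legit = impressions * (1 - 0.20)    # ~20% fraudulent / non-human traffic
viewable = legit * (1 - 0.60)       # ~60% of legitimate inventory not viewable
quality = viewable * (1 - 0.07)     # ~7% of viewable ads sit on ad-farm pages
# Frequency problems, using midpoints of the 10-40% and 5-30% ranges:
effective = quality * (1 - 0.25 - 0.175)

print(f"{effective:,.0f} effective impressions "
      f"({effective / impressions:.0%} of the buy)")
```

Under these assumptions roughly 17 million of the 100 million impressions survive every filter, which is consistent with the "less than 20%" figure above.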

Who’s To Blame?

When we present these analyses to clients, the first question often asked is, “Who’s to blame?” Unfortunately, there is no single culprit behind the RTB inventory problem. As mentioned, the problem is due largely to a lack of controls and perverse incentives.

  • Lack of Controls: While a growing number of brands and agencies are incorporating viewability and using algorithmic analytical tools, most are still in the dark ages. Some feel their results are “good enough” and choose not to dig deeper. Others seem not to care. Hopefully this will change.
  • Perverse incentives: We live in a CPM world where everyone in the RTB value chain – save the advertiser – profits from wasted spending. It’s not just the DSPs, exchanges and ad networks that benefit; traditional publishers now extend their inventory through RTB and unknowingly contribute to the problems mentioned above. While steps are being taken to address these issues, we’re not going to see dramatic improvement until the status quo is challenged.

 

How To Fix The Problem

The good news is that the RTB inventory problems are solvable. Some tactical fixes include:

  • We should invest in viewability, fraud detection and prevention, and algorithmic attribution solutions. While not expensive, they do require a modest investment of time, energy and budget. But when you consider the cost of doing nothing – and wasting 50 to 80% of spending – the business case for investing is very compelling.
  • We need to stop using multiple trading desks and RTB ad networks on a single campaign, or they’ll end up competing against each other for the same impressions. This will reduce the redundant reach and excessive frequency while keeping a lid on CPMs. It will also make it easier to pinpoint problems when they occur.
  • Finally, we need to analyze frequency distribution each month. Average frequency is a bad metric, as it can mask a lot of waste. If 100 users are served only one ad each and one user is served 500 ads, the average frequency is about six, yet nearly all of those impressions are wasted: the light users saw too few ads to be influenced, and the heavy user saw far too many. Look at the distribution of ads by frequency tier to see where waste is occurring.
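The frequency-tier analysis described above can be sketched in a few lines (the tier boundaries here are illustrative, not a standard):

```python
from collections import Counter

# Impressions per user: 100 users saw 1 ad each, one user saw 500.
impressions_per_user = [1] * 100 + [500]

total = sum(impressions_per_user)
avg_frequency = total / len(impressions_per_user)
print(f"average frequency: {avg_frequency:.1f}")  # ~5.9 -- looks healthy

# The distribution by tier tells the real story.
tiers = {
    "1-2 (too low)": (1, 2),
    "3-20 (effective)": (3, 20),
    "21+ (excessive)": (21, float("inf")),
}
by_tier = Counter()
for freq in impressions_per_user:
    for name, (lo, hi) in tiers.items():
        if lo <= freq <= hi:
            by_tier[name] += freq
            break

for name, imps in by_tier.items():
    print(f"{name}: {imps} impressions ({imps / total:.0%})")
```

The average of roughly six masks the fact that 100% of the impressions fall outside the effective tier: 17% served too infrequently to matter and 83% dumped on a single over-served user.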

For strategic change to occur, brands and their agencies must lead the way. In this case, “leading” means implementing controls and making their vendors accountable for quality and performance of display media.

  • Brands must demand more accountability from their agencies. They also need to equip them with the tools and solutions to address the underlying problems.
  • Agencies must demand better controls and make-goods from media vendors. Until we have better controls for preventing fraud and improving the quality of reach and frequency, media vendors need to stand behind their product, enforce frequency caps and make internal investments to improve the quality and efficiency of their inventory.
  • All buyers must make their DSPs and exchanges accountable for implementing more comprehensive solutions to address the fraud and frequency problems.

 

The Opportunity

We can’t expect a utopian world where no ads are wasted, but we can and should make dramatic improvements. By reducing waste, advertisers will see even greater returns from display media. Higher returns translate into larger media budget allocations, and that will benefit us all.

While fixing the problems may dampen near-term RTB growth prospects, it will serve everyone in the long run. Removing waste and improving quality of media will help avoid a bubble while contributing to the sustainable growth of the digital media industry.  Given the growing momentum in the public and private equity markets, I hope we as an industry take action sooner rather than later.

As always, comments are welcome.

Steve Latham
@stevelatham



Algorithmic Attribution SES Chicago

November 7th, 2013

At SES Chicago I introduced Algorithmic Attribution and discussed the implications for search marketers.  Please feel free to download the deck and let me know if you have any questions!

Download pdf:  Algorithmic Attribution SESChicago2013

Steve Latham
@stevelatham

 

Demystifying Attribution: Which Approach is Best?

June 23rd, 2012

As published by Adotas 6/22/12 (view article)

Digital media attribution is a hot topic, but it’s still a confusing concept for many. Given a lack of industry standards, and an ever-expanding list of attribution methodologies, it can be difficult for marketers to determine exactly what they need. This post intends to educate and enable marketers to optimize digital spend through advanced insights.

The Funnel and Attribution

Simply defined, attribution is allocating credit to each interaction that drives a desired action (visit, goal page view, conversion, etc.). Within this broad definition, there are two primary distinctions:

• Lower-funnel or click-based attribution incorporates assist clicks when allocating credit for conversions. Compared to last-click reporting, this is a step in the right direction. The limitation to lower-funnel attribution is that it severely discounts the role of display advertising while overstating the role of search, affiliate, email and other click-centric media. For online advertisers seeking a more complete picture, a full-funnel view is required.

• Full-funnel attribution builds on click-based attribution by incorporating assist impressions from display ads (video, rich media, flash and .gifs) when allocating credit for visits or conversions. Recognizing that display ads can be very effective, even in the absence of clicks, a full-funnel attribution model is needed to quantify the true impact display ads have in creating awareness, consideration and preference.
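The distinction above can be made concrete with one toy conversion path (the even-credit split below is a deliberately naive placeholder, not a recommended weighting):

```python
# One user's path to conversion: two display impressions, then a search click.
path = [
    ("display_vendor_A", "impression"),
    ("display_vendor_A", "impression"),
    ("paid_search", "click"),
]

# Last-click: 100% of the credit goes to the final click.
last_click = {path[-1][0]: 1.0}

# Full-funnel (naive even split): every assist impression and click shares credit.
full_funnel = {}
for channel, _ in path:
    full_funnel[channel] = full_funnel.get(channel, 0.0) + 1.0 / len(path)

print(last_click)   # search gets everything
print(full_funnel)  # display's assists now surface in the numbers
```

Even this crude split shows why the two views diverge: last-click hands paid search all of the credit, while the full-funnel view surfaces the display assists that preceded it.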

Cross-Channel Attribution

Cross-channel attribution addresses the role each digital channel (display, paid search, natural search, email, affiliate, etc.) plays in the customer engagement process. While conversion paths are interesting, they aren’t very actionable. To truly understand and optimize each channel, you must allocate fractional credit to each channel and placement that contributes to a measurable action. This generally results in a shifting of credit from non-paid channels (organic search, direct navigation, referring sites) back to the paid media (display, paid search, email, etc.) that “fed” the non-paid channels. Before diving into weighting methodologies, let’s first look at leading approaches to attribution.

Three Approaches to Attribution

While there are many approaches to attribution, here are three you should be familiar with:

• Statistical attribution is based on traditional media-mix modeling, which relates to analysis of disparate data sets. In this approach, you would analyze three months of impression data and three months of conversion data and look for relationships between the data sets. At best, this approach can provide high-level directional signals. If you want granular insights into the impact of each channel, vendor, placement or keyword, you need a more granular approach.

• A/B testing seeks to attribute credit and validate causation by observing results from pre-defined combinations of media placements. A/B testing can be used to measure display’s impact on results from search, as well as the performance of a specific creative, vendor, market or channel. While A/B testing is a great way to observe directional insights, it’s nearly impossible to exclude or account for other factors (seasonality, competitors, macro-economics, weather, etc.) that might impact performance between the control and test groups.

• Operational attribution takes a bottom-up (visitor-based) approach to analyzing and allocating credit to the impressions, clicks and visits that precede each conversion. With operational attribution, there is no need to calculate possible conversion paths – you have the actual data, which provides a more granular and accurate data set for advanced analysis (i.e., heuristic and/or statistical modeling). As with any approach, caution must be exercised to define an appropriate look-back window, cleanse the data set, and exclude factors (e.g. wasted impressions and post-conversion visits) that might skew the results.
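The cleansing step described above can be sketched as a simple filter (the 30-day window and the field names are illustrative assumptions, not a prescribed schema):

```python
from datetime import datetime, timedelta

LOOKBACK = timedelta(days=30)  # illustrative look-back window

def eligible_touchpoints(touchpoints, conversion_time):
    """Keep interactions inside the look-back window and drop post-conversion visits."""
    return [
        t for t in touchpoints
        if conversion_time - LOOKBACK <= t["time"] <= conversion_time
    ]

conversion_time = datetime(2012, 6, 1)
touchpoints = [
    {"channel": "display", "time": datetime(2012, 4, 1)},   # outside window: dropped
    {"channel": "display", "time": datetime(2012, 5, 20)},  # eligible assist
    {"channel": "search",  "time": datetime(2012, 5, 31)},  # eligible
    {"channel": "direct",  "time": datetime(2012, 6, 2)},   # post-conversion: dropped
]

clean = eligible_touchpoints(touchpoints, conversion_time)
print([t["channel"] for t in clean])  # ['display', 'search']
```

Only the cleansed set should feed the weighting model; crediting the stale impression or the post-conversion visit would skew the results exactly as the paragraph above warns.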

Manual vs. Statistical Weightings

Within the framework of operational attribution, weighting assist impressions and clicks is both art and science. Here are the two primary approaches:

• Subjective weighting of impressions and clicks: First-generation platforms require the marketer to define the rules for allocating credit to each interaction. While this approach is easy and flexible, it lacks statistical rigor and entails too much guesswork. Moreover, it allows the operator to influence outcomes through the assumptions that drive the model. But the biggest problem is that it allocates credit based on the actual number of assist impressions, rather than using observable data to model how many are actually needed. For example, you may find that 12 impressions preceded an average conversion, when only six impressions were actually needed. If you give credit for wasted impressions, you end up rewarding vendors for over-serving customers.

To reduce subjectivity and improve accuracy in how impressions and clicks are weighted, we must use machine learning and algorithmic modeling.

• Algorithmic weighting: To remove the guesswork in attribution, we can use machine learning and proven algorithms to calculate probability-based weightings for assist impressions and clicks. This removes the arbitrary nature of manual weightings and provides much higher levels of confidence and comfort for marketers.
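There are many ways to derive probability-based weightings (logistic regression, Markov chains, Shapley values); as a purely illustrative stand-in for those heavier methods, the sketch below weights each channel by its observed conversion lift across paths:

```python
from collections import defaultdict

# Observed conversion paths and outcomes (toy data).
paths = [
    (["display", "search"], True),
    (["display", "search"], True),
    (["search"], False),
    (["display"], False),
    (["search"], True),
]

# Conversion rate for paths containing each channel, relative to the overall rate.
seen, converted = defaultdict(int), defaultdict(int)
for channels, did_convert in paths:
    for ch in set(channels):
        seen[ch] += 1
        converted[ch] += did_convert

overall_rate = sum(c for _, c in paths) / len(paths)
lift = {ch: (converted[ch] / seen[ch]) / overall_rate for ch in seen}

# Fractional credit for one converting path, proportional to each channel's lift.
path = ["display", "search"]
total = sum(lift[ch] for ch in path)
credit = {ch: lift[ch] / total for ch in path}
print(credit)
```

The weights come from observed outcomes rather than an operator's assumptions, which is the essential difference from the subjective approach; a production model would of course use far richer features and a validated algorithm.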

There are numerous approaches to statistical modeling and plenty of vendors vying for the “best math” award. While it’s hard to say which approach is best, we have seen that most marketers prefer transparency to opacity, and known algorithms to proprietary models. If you’re going to put your neck on the line to defend a new measurement standard, you should be comfortable with the approach and its underlying assumptions. In general, a transparent, statistically validated approach is best.

Understanding Tradeoffs

There is an endless number of attributes you can seek to measure, including segment, format, audience demographics, creative concept, message, frequency, day-part, sequence, etc.  While the idea of Metric Nirvana is appealing, it’s also very elusive if you try to get there overnight.

It’s important to recognize that the more granular you get, the more data you need and the more complex the setup, production, reporting and analysis. Many attempts to go directly from last-click to advanced micro-attribution fail due to the complexity of implementation and analysis.  We all crawled before we walked, and walked before we ran. Analytics should be no different.

If you’re still using last-click metrics, start with channel-level and publisher-level attribution. Once you’ve identified your top performing vendors, delve deeper sequentially (as opposed to all at once) by looking at recency, format, creative and other variables that may have incremental impact on the end results. Just remember that with each incremental value, the scale, complexity, and risk of failure increase dramatically. So start with the low-hanging fruit, and work your way up the tree.

To sum all this up, we at Encore and MediaMind advocate the following best practices:

• Use operational, full-funnel and cross-channel attribution.
• Use machine learning to weight and allocate fractional credit to assist impressions and clicks.
• Improve your chances of success by starting with the basics and getting more granular over time.

Hopefully you now have a better understanding of the attribution landscape and some of the distinctions within it.  As always, feel free to comment, tweet, like, post, share, or whatever it is you do in your own social sphere.  Thanks for stopping by!

@stevelatham


 

Takeaways: Display Ecosystem Panel Discussion

May 7th, 2012

 

Last month I had the pleasure of moderating the Display Ecosystem panel (View the Video) at Rapleaf’s 2012 Personalization Summit.  On my panel were experts from leading companies that represented numerous categories within the display landscape.  Panelists included:

  • Arjun Dev Arora – CEO/Founder, ReTargeter @arjundarora
  • Key Compton – SVP Corporate Development, Clearspring @keycompton
  • Tod Sacerdoti – CEO & Founder, BrightRoll @todsacerdoti
  • Mark Zagorski – CEO, eXelate @markzexelate

Our discussion addressed many of the issues that we are grappling with in the Ad-Tech industry, including:

  • Complexity: The challenges of planning, executing, measuring and optimizing display media are exacerbated by the complexity in our space.  How can we reduce the cost and level of effort required via integration, prioritization, standards, etc.?
  • Consolidation: What will the landscape look like in 2 years?  Will there be more or fewer players?  Where will consolidation take place?  Who will be acquired and by whom?
  • Effectiveness: What can the industry do to improve performance and effectiveness of advertising? How will better targeting, data-driven personalization, frequency management and 360 customer-centric approaches improve efficacy of online marketing?
  • Accountability: Where are the gaps today, and how should we be measuring results, performance, ROI, etc.?
  • Outlook: What must publishers, ad networks, DSPs and agencies each do to survive / thrive in this hyper-competitive marketplace?
  • Other issues: privacy, legislation, new platforms, etc.  In order to fully realize the potential of display advertising (i.e. Google’s $200bn forecast) these will need to be addressed.

After our discussion, I thought about the implications for the Display Ad ecosystem, and for the Ad-Tech industry as a whole.  Here are a few of my thoughts…

  • No other industry is as innovative, adaptive and hyper-competitive as ad-tech. Where else can new niches evolve to multi-million dollar categories overnight with hundreds of startups raising billions in financing every year?  We’ve all seen industries where startups disrupted an established ecosystem for a period of time.  But where else does this happen over and over and over again?  Our industry is all about disruption and it doesn’t take long for the challenger startups to become the established incumbents or targets.
  • No other industry creates wealth like ad-tech.  Where else can companies launch, raise capital and exit for hundreds of millions (or more) in less than 18 months?  Where else are so many successful entrepreneurs (and their benevolent VC backers) rewarded with lifetime wealth for 1-3 years of work?  It’s pretty amazing if you think about it… our modern day decade-long gold rush.
  • Success in our industry requires mastery of several disciplines: marketing, technology and data science.  You can’t be a world-class ad-tech company without expertise and experience in all 3 of these categories.
  • While we are making progress as an industry, we still have so far to go.  Despite the advances in targeting, real-time bidding, dynamic creative optimization, analytics and optimization techniques, most media buying is still done the same way it was 5 years ago.
  • There is still much confusion about how real-time exchanges work, and how they can be utilized by agencies and advertisers.  When you overlay that with efforts to aggregate 1st party data, creating proprietary cookie pools and using that data to find new audiences, many marketers become quickly overwhelmed.
  • We still have a scale problem that must be addressed.  While there is a huge supply of impressions available for real-time bidding, there are only so many unique audiences in the warehouses operated by the data providers.  The more granular you get from a targeting standpoint, the smaller your reach will be.  Frequency capping is challenging, so you end up with hundreds or even thousands of impressions being served to a small pool of unique users.
  • We still have a people problem.  All the technology in the world won’t save us if we don’t have people trained to leverage these capabilities.  We also need a deeper pool of managers and leaders who can bring operational excellence to a fledgling, always-evolving industry.

The wall mural below sums up the discussion – and made for a nice graphic snack for attendees.

 

As always, feel free to comment, tweet, like, post, share, or whatever it is you do in your own social sphere.  Thanks for stopping by!

@stevelatham


 

Ad-Tech Attribution Case Study

April 25th, 2012

In April 2012 I presented a case study on Full-Funnel Attribution at the granddaddy of all industry conferences: Ad-Tech in San Francisco.

I was honored to share the stage with Young-Bean Song, one of the pre-eminent thought leaders in digital media measurement and analytics (and a very nice guy).  After years of applying to speak at Ad-Tech, I was finally selected – not because I’m the world’s most pre-eminent speaker, but because the case study we developed is so effective at presenting how advanced analytics and full-funnel, cross-channel Attribution can be utilized to maximize performance and boost Return On Spend.

Among the highlights of the case study, we demonstrated:

  • How converters who were exposed to display ads followed a range of conversion paths before taking the desired action(s).
  • How attributing fractional credit for assist impressions and clicks (beyond just the last click) yielded much deeper insights into the performance of each channel, vendor, placement and keyword.
  • How recency (the time lag between the first impression, last impression, visit and conversion) impacted performance.
  • How frequency is still a big issue that needs to be addressed – especially when buying exchange-traded media.

For those who didn’t make the show, I’m happy to share the case study in two formats (both are hosted on slideshare):

If you’d like to learn more about Attribution or discuss the case study, please drop me a line (see Contact link below).  Also please feel free to comment, tweet, like, post, share, etc. as you see fit.  Thanks for your time and interest!

@stevelatham


 

It’s Hard to Solve Problems from an Ivory Tower

March 2nd, 2012

Today a colleague sent me a link to a new article on Attribution and media measurement with a request to share my thoughts. Written by a statistician, it was the latest in a series of published perspectives on how Attribution should be done. When I read it, several things occurred to me (and prompted me to blog about it):

  1. Are we still at a point where we have to argue against last-click attribution?  If so, who is actually arguing for it?  And are we already at a point where we can start criticizing those few pioneers who are testing attribution methodologies?
  2. Would a media planner (usually the person tasked with optimizing campaigns) understand what the author meant in his critique: “the problem with this approach is that it can’t properly handle the complex non-linear interactions of the real world, and therefore will never result in a completely optimal set of recommendations”?  It may be a technical audience, but we’re still marketers… right?
  3. The article discusses “problems” that only a few of the largest, most advanced advertisers have even thought about.  When it comes to analytics and media measurement, 95% of advertisers are still in first grade, using CTRs and direct-conversions as the primary metric for online marketing success. They have a lot of ground to cover before they are even at a point where they can make the mistakes the author is pointing out.

In reading the comments below the article, my mind drifted back to business school (or was it my brief stint in management consulting?) and the theoretical discussions that took place among pontificating strategists.   And then it hit me… even in one of the most innovative, entrepreneurial and growth-oriented industries, an Ivory Tower mindset somehow still persists in some corners of agencies, corporations, media shops and solution providers.  Not afraid to share my views, I responded to the article in what I hope was a polite and direct way of saying “stop theorizing and focus on the real problem.” Here is my post:

“…We all agree that you need a statistically validated attribution model to assign weightings and re-allocate credit to assist impressions and clicks (is anyone taking the other side of this argument?).  And we all agree that online is not the only channel that shapes brand preferences and drives intent to purchase.

I sympathize with Mr. X – it’s not easy (or economically feasible) for most advertisers to understand every brand interaction (offline and online) that influences a sale. The more you learn about this problem, the more you realize how hard it is to solve.  So I agree with Mr. Y’s comment that we should focus on what we can measure, and use statistical analyses (coupled with common sense) to reach the best conclusions we can. And we need to do it efficiently and cost-effectively.

While we’d all love to have a 99.9% answer to every question re: attribution and causation, there will always be some margin of error and/or room for disagreement. There are many practitioners (solution providers and in-house data science teams) that have studied the problem and developed statistical approaches to attributing credit in a way that is more than sufficient for most marketers.  Our problem is not that the perfect solution doesn’t exist. It’s that most marketers are still hesitant to change the way they measure media (even when they know better).

The roadblocks to industry adoption are not the lack of smart solutions or questionable efficacy, but rather the cost and level of effort required to deploy and manage a solution.  The challenge is exacerbated by a widespread lack of resources within the organizations that have to implement and manage them: the agencies who are being paid less to do more every year.  Until we address these issues and make it easy for agencies and brands to realize meaningful insights, we’ll continue to struggle in our battle against inertia. For more on this, see “Ph.D Targeting & First Grade Metrics…”

I then emailed one of the smartest guys I know (data scientist for a top ad-tech company) with a link to the article and thought his reply was worth sharing:

“I think people are entirely unrealistic, and it seems they say no to progress unless you can offer Nirvana.”

This brings me to the title of this post: It’s hard to solve problems from an Ivory Tower.  Note that this is not directed at the author of the article, but rather at a mindset that persists in every industry.  My point is that armchair quarterbacks do not solve problems. We need practical solutions that make economic sense.  Unless you are blessed with abundant time, energy and resources, you have to strike a balance between “good enough” and the opportunity cost of allocating any more time to the problem.   This is not to say shoddy work is acceptable; as stated above, statistical analysis and validation is a best practice we both preach and follow.  But even a so-called “arbitrary” allocation of credit to interactions that precede conversions is better than last-click attribution.  It all depends on your budget, resources and the value of advanced insights.  Each marketer needs to determine what is good enough, and how to allocate their resources accordingly.
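To make the point concrete, here is a minimal sketch of why even a simple multi-touch rule beats last-click. The conversion path, channel names, dollar value, and credit weightings below are all hypothetical illustrations, not any particular vendor’s model:

```python
# Hypothetical conversion path: ordered touchpoints preceding a $200 booking.
path = ["display_ad", "paid_search", "email", "direct"]
conversion_value = 200.0

# Last-click: 100% of the credit goes to the final touchpoint.
last_click = {ch: 0.0 for ch in path}
last_click[path[-1]] = conversion_value

# Linear ("even split"): credit divided equally across all touchpoints.
linear = {ch: conversion_value / len(path) for ch in path}

# Position-based ("U-shaped") example: 40% to the first touch, 40% to the
# last, and the remaining 20% spread across the middle interactions.
u_shaped = {ch: 0.0 for ch in path}
u_shaped[path[0]] += 0.4 * conversion_value
u_shaped[path[-1]] += 0.4 * conversion_value
for ch in path[1:-1]:
    u_shaped[ch] += 0.2 * conversion_value / max(len(path) - 2, 1)

print(last_click)  # the display ad that started the journey gets zero credit
print(linear)
print(u_shaped)
```

Under last-click, the display ad that opened the journey earns nothing, so its budget looks wasted; under either alternative it gets measurable credit. The specific weights matter less than moving off last-click at all.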

Most of us learned this tradeoff when studying for finals in college: if you can study 3 hours and make a 90, or invest another 3 hours to make a 97 (recognizing that 100 is impossible), which path would you choose?  In my book, an A is an A, and with those 3 additional hours you could have prepared for another test, sold your textbooks or drunk beer with your friends.  Either way, you would extract more value from your limited time and energy.

To sum up, we need to shift our energies away from theoretical debates on analytics and media measurement, and toward the issues that actually impede progress.  The absence of a perfect solution is not an excuse to do nothing. And more often than not, the perfect solution is not worth the incremental cost and effort.

As always, feel free to comment, tweet, like, post, share, or whatever it is you do in your own social sphere.  Thanks for stopping by!

@stevelatham


OMMA Metrics Panel Video: Social Media ROI

June 30th, 2011

Encore founder and CEO Steve Latham moderated the “Measuring Social ROI” discussion at the OMMA Metrics NYC Conference in March 2011.  The big questions addressed were:

1. Social Media: Shiny Object or ROI Producer?
2. What are brands doing to measure the impact of social ROI?
3. What works and how do you know?

These questions were discussed by industry thought leaders and expert practitioners from across the country including:

- Adam Cahill, EVP Media Director, Hill Holliday
- Ben Straley, CEO & Co-Founder, Meteor Solutions
- Jonathan Mendez, Founder & CEO, Yieldbot
- John Lovett, Senior Partner & Principal Consultant, Web Analytics Demystified, Inc.
- Jascha Kaykas-Wolff, VP of Marketing, Involver
- Moderator: Steve Latham, Founder and CEO, Encore Media Metrics

A video of the panel is embedded for viewing below.  You may also view it on Ustream.

 

As always, feel free to comment and share!

The Encore Team


Display Advertising Landscape

June 10th, 2011

In early June I was fortunate to be one of 350 ad tech CEOs who attended LUMA Partners’ Digital Media Summit in NYC, featuring the best and brightest in the industry.  I’ve been to some great networking events before (IAB, 4A’s, etc.) but this was tough to beat.

In addition to meeting some amazing people, one of the highlights was the release of the latest display ad landscape or “LUMAscape” aka “the slide” that was originally produced by Terence Kawaja in 2010.  For those who are new to display advertising (or have been out of the market for the last 3 years), buying display media is like buying a house: you also need phone service, internet, cable, gas, electricity, dog-walking, etc.  In this case, Media is the house; ancillary services include ad verification, OBA compliance, data/tag management, audience measurement, ad serving, and our favorite: attribution.

The newest version of the slide is getting ever closer to accurately depicting all the segments and sub-segments that comprise the digital advertising landscape.  It also marked the debut of Encore Media Metrics as a recognized leader in the Attribution and Measurement category.

“The Slide” may also be viewed on SlideShare, or you can download the LUMA Display Landscape here.

The industry is extremely fragmented, and is likely to stay that way for a while.  So if you want to play in the display advertising space (either as a buyer, seller or manager) you need to understand the difference between a DSP, DMP and SSP without yelling “WTF!”  Yes, it’s easier said than done but this map should help you get started.

Steve Latham (@stevelatham)
