Monday, April 30, 2007

Can LTV Really Replace Product-Based Metrics?

For some time now, I've been touting lifetime value as the magic cure-all for customer management programs. I still believe LTV is the essential foundation but recognize that product-oriented measurement will not simply vanish once LTV appears. As long as products are what people purchase, companies will need product managers to nurture them and will create product-level profit statements to judge their performance.

I’m tempted to wave my LTV magic wand and argue that products themselves could be measured on their LTV contribution rather than traditional profit-and-loss. There’s considerable logic to this: if you had a product that was itself profitable but so annoyed customers that they never purchased from you again, wouldn't you want to know about this? The problem is that LTV measures inherently rely on projections, which feel less reliable to many people than numbers that simply record actual transactions. And, let’s face it, identifying the incremental impact of a particular product on a customer’s LTV is quite a challenge—at least one order of magnitude more difficult than measuring the LTV as a whole.
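To make the arithmetic concrete, here is a toy comparison of the kind I have in mind. All the numbers are invented, and the retention-based LTV formula is just the simplest textbook version, not a recommendation:

```python
# A toy illustration of why a product P&L and incremental LTV can disagree.
# Every number here is made up for the example.

def ltv(annual_margin, retention_rate, discount_rate=0.10, years=10):
    """Simple retention-model LTV: discounted expected margin over ten years."""
    value, survival = 0.0, 1.0
    for year in range(years):
        value += survival * annual_margin / ((1 + discount_rate) ** year)
        survival *= retention_rate
    return value

# Baseline customer: $200 margin per year, 80% retention.
base = ltv(annual_margin=200, retention_rate=0.80)

# Suppose a product earns $50 of margin up front (so its own P&L looks fine)
# but irritates buyers enough to cut retention to 65%.
with_product = 50 + ltv(annual_margin=200, retention_rate=0.65)

print(f"LTV without the product: {base:8.2f}")
print(f"LTV with the product:    {with_product:8.2f}")
print(f"Incremental LTV impact:  {with_product - base:8.2f}")  # negative in this example
```

The product shows $50 of profit on its own statement, yet the incremental comparison says it destroys lifetime value. That is exactly the kind of result a product-level P&L will never surface.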

I don't have a solution. One option is to double down on LTV, working to develop the incremental impact measures and to make them credible throughout the organization. You would then essentially banish product profitability as a measure—not literally, but by deemphasizing it in reports and compensation programs. This could probably be done with strong management leadership but it would definitely be a strain.

The other choice is to find a way to use product profitability measures so they reinforce rather than conflict with LTV results. The traditional approach to this is transfer pricing, but nobody ever finds that satisfactory. Nor do I see exactly how transfer pricing fits in here. Going back to first principles, the reason we need to look at product performance is that we want to incent product managers to build the best possible products and to ensure they are being sold as broadly as possible. Products do, after all, represent a major investment of corporate resources and we want to maximize return on that investment.

But is it possible to define the “best possible products” in any terms other than their impact on lifetime value? I think not. We’ve already seen why looking at product profitability in isolation is likely to be misleading. So it seems committing fully to LTV is the only real choice.

This is a classic dilemma. On the one hand, “failure is not an option”. On the other hand, no matter how hard we try, failure is a real possibility. So while trying very hard to make LTV work, we have to consider alternatives if we can’t come up with an effective approach. This presumably means some modified version of product profitability that incorporates certain LTV concepts. I don't know what that would look like, but will let you know if I come up with any ideas.

Friday, April 27, 2007

The Downside of On-Demand

Several thoughts are swirling through my mind just now. Some are actually printable. One is this notion of “on demand” customer analytics, and what it says about the concept of “on demand” software in general. The other is the separation of analytic and execution systems, a fundamental distinction that seems increasingly questionable.

Thoughts of on demand analytics are of course prompted by yesterday’s purchase of Loyalty Matrix by Responsys, which Responsys described as a step toward becoming an “on demand marketing company.” Loyalty Matrix and Responsys are both “on demand” in the sense of delivering their services over the Internet. Maybe that’s all they mean by the label, and in that sense it’s certainly correct.

But I have come to associate “on demand” with “instant deployment,” and was at least initially intrigued at the prospect of services I could turn on almost immediately. (I’m not suggesting that Loyalty Matrix or Responsys make this claim.) Such speed is a major appeal for on demand systems in general. It is valid for products that don’t take much configuration or integration with other enterprise systems.

But customer history is important to customer management. That history is usually constructed by combining and standardizing data from multiple sources, and must be updated carefully to continue to make sense. Although traditional approaches could be improved upon, I don’t think the process can ever be removed altogether or made trivially simple. So it will always take a significant amount of time to deploy a customer database, even if it does become possible to instantly attach execution systems such as email senders.

Here's where that distinction between analysis and execution systems comes in. Ever since people started building data warehouses, accepted wisdom has been that they should be separate. But analytics are increasingly embedded within execution systems to drive customer-specific treatments. My question is this: if the customer data portion of analysis systems can't be delivered on demand, doesn't that mean analytically-enriched execution systems can't be delivered on demand either?

The obvious answer is: turn on the non-analytical portions of those systems at once and build up the analytical components later. Well, maybe. In the real world this would seem to mean doing the equivalent of a second implementation and not many people are going to like that idea. They’re more likely to either not use the analytical capabilities at all (pretty common, I suspect), or to delay implementing the system until they’re in place.

Sorry if I’m rambling a bit here—it’s the end of a long week. What I’m suggesting is we might need to look more carefully at whether the on demand model builds unrealistic expectations for deployment speed, and then leads to poor results as people sacrifice the sophistication to complete the project. Of course, such compromises are common with all sorts of systems. But if the on demand mindset makes them especially likely, we should bear that in mind when assessing on demand systems' value.

Thursday, April 26, 2007

Responsys Buys Loyalty Matrix

Speaking of integrated systems, consider yesterday's announcement that email marketing provider Responsys has purchased Loyalty Matrix, which offers on-demand marketing analytics. According to the press release, Responsys hopes to become "the only on-demand marketing company capable of helping marketers not only deliver – but predict and deliver – the right messages to each customer, via email, direct mail and mobile."

It’s tempting to read this as part of the on-going consolidation within the digital marketing industry, related to the Omniture/Touch Clarity and Acxiom Digital/Kefta deals announced earlier this year. But, judging from their Web site, Loyalty Matrix offers off-line analysis and promotion recommendations, not the real-time targeting and personalization of Touch Clarity and Kefta. Loyalty Matrix also extends beyond digital channels to take data from all sources and produce multi-channel contact strategies.

So something else is going on here. Responsys is positioning itself as a marketing services agency, combining strategy, analytics and execution, rather than a software developer or message delivery provider. This would put it more in competition with firms like Acxiom, Knowledgebase and Epsilon. Many of those companies already provide email execution similar to Responsys's core business, so in some ways this could be seen as a defensive expansion by Responsys.

But the heart of a marketing services agency is its ability to maintain sophisticated marketing databases. This would be a big change for Responsys, which now works mostly with lists that clients generate elsewhere. Loyalty Matrix does some database building, but it's more to integrate snapshots for analysis than to manage databases over time.

As I wrote yesterday, building the database is tougher than analyzing it. Loyalty Matrix could be the foundation of a new Responsys business or it could just add some analytical punch to its current offerings. It will be interesting to see which road Responsys follows.

Wednesday, April 25, 2007

Which Comes First, The Test or the Metric?

A Web marketer commented to me recently that he thought his firm should focus first on basic “blocking and tackling” before moving on to more sophisticated efforts. When I asked what he considered basic, he said analyses like understanding customer value. He considered the advanced projects to be things like site optimization and multi-variate testing.

My initial reaction was that he had it backwards. To me, it seems harder to assemble accurate customer data across multiple channels and systems than to implement off-the-shelf testing software.

But maybe he had it right. Even though assembling comprehensive customer data may be much more challenging, optimization without a proper measurement capability can lead to bad decisions. So even though it might be quicker and easier to set up the testing systems, it might be best to concentrate first on assembling the customer data.

As a devotee of lifetime value, I find this a comforting notion. But the hard-nosed businessperson in me says testing will yield immediate benefits even if it uses short-term measures like response rate. You need to look at the back end too, and as quickly as possible. But building a comprehensive measurement system may take a long time. Not testing until that's done means deferring the benefits that testing can provide. Unless testing is likely to lead to harmful choices, it's worth moving ahead even with imperfect measurements.

Tuesday, April 24, 2007

Customer Experience Management Isn't Enough

Ron Shevlin's comment on yesterday's post concludes that "without more disruptive changes (re-org and fundamental change in strategy) -- even starting with a focus on the 'customer experience' won't guarantee a sustained change."

Ron is absolutely right and the implications are worth considering. "Customer experience" and "customer experience management" are not goals in themselves. They are ways to understand and run a business. People adopt them because they make the business more profitable than alternative techniques. If you don't think that's the justification, consider the opposite: would anyone advocate "customer experience management" if they thought it would make companies less profitable?

This is precisely why I’ve been so focused on Lifetime Value: I see it as the link between customer experience management and financial performance. My view, perhaps naively, is that companies will adopt customer experience approaches once LTV has shown what customer experience management is worth.

But Ron’s suggestion of “re-org and fundamental change in strategy” raises the question of whether LTV is enough. We all know that organizations act in ways that are not purely rational. Certainly the personal and political interests that favor existing, product-based organization structures will not vanish simply because LTV analysis shows they yield suboptimal results.

But if financial measures won’t lead to change, what can? This leads straight to leadership. Top management must first believe that customer experience management is a superior strategy. Then they can take steps to make it happen.

It's impossible to discuss this in any terms other than religious ones: it's a matter of faith and vision. As with any religious belief, it's hard to predict who will feel the call at any particular moment. But spreading the faith does require continued evangelism and support systems to help new converts sustain and deepen their engagement.

LTV is one of those support systems. CEM consultants, marketing agencies like Ron’s Epsilon and technology companies like James Taylor’s Fair Isaac (with its Enterprise Decision Management concepts) are others. I guess it’s not terribly flattering to think of oneself as a support system. But it’s an important role and helps many companies to move ahead. So it’s certainly worth the effort.

Monday, April 23, 2007

BAI Banking Strategies Article Shows Importance of Managing Complexity

Last Thursday’s post on ad hoc analytical systems prompted an interesting set of comments about overcoming product-based organization at banks. As it happens, the BAI’s online Banking Strategies magazine published a related article last November, called Uncovering the Hidden Cost of Complexity.

The article starts by suggesting that customer focus causes the product proliferation that makes good customer experiences so difficult to create: “As banks became more customer-focused over the last decade, they expanded their product set rapidly.”

This is an intriguing thought although it seems like blaming the victim for the crime. But after describing the problems caused by complexity, the authors move in the opposite direction. “The implication is not that banks should limit the variety of options to their customers, but that they should do the following” to ensure complexity adds value:

- Understand what complexity is valued by customers
- Quantify the hidden costs
- Identify the drivers and impacts of complexity
- Design processes to handle variety
- Innovate around advantages

You can read the article for the details. I’m not sure whether complexity is a problem in its own right or just a symptom of other issues. But I do agree with the authors' fundamental point that systems and processes must be designed to deal with complexity effectively. And I definitely agree with the authors’ focus on the quality of customer experience as the primary goal of each business. Any approach that starts from that premise can only lead in a good direction.

Friday, April 20, 2007

Experian Buys Hitwise to Track Web Audiences

It’s hard to find a short label for Experian, whose business is to gather data about people and companies and then use it for credit reports and marketing. The company’s Marketing Solutions group offers database building, targeting and message delivery for direct mail and electronic channels, as well as syndicated consumer research. Other groups track consumer and business credit, automobile ownership information, and online services for consumers.

Between its own online products and the online marketing services it offers its customers, Experian has a substantial presence in the digital marketing arena. It expanded this significantly yesterday with a $240 million purchase of Hitwise, a firm that tracks Web traffic and aggregated consumer behavior. Hitwise gathers traffic information from Internet Service Providers (ISPs), so it knows about site visits, search words, referrals, and other information contained in an HTTP header. Although it can’t connect that directly to individuals, it does supplement the ISP data with results from a panel of identified consumers. This allows it to report on behavior of different consumer segments.

In other words, Hitwise more closely resembles Simmons’ audience research than it does the databases of personal information built elsewhere at Experian. Like Simmons data, Hitwise results mostly help advertisers target their messages by understanding who sees the different media properties.

The notion of syndicated research panels seems almost quaint in the context of online behavior, where so much targeting is based on information about specific individuals. But it's a useful reminder that there are and will remain large gaps in customer knowledge, which aggregate data will fill as best it can. Marketers should not become so fascinated by the new techniques that they stop using the old ones.

Thursday, April 19, 2007

Are System Requirements Obsolete? (Probably not, but that should get your attention.)

I often joke about how easy it is to take two or three loosely related bits of information and declare them a trend. But there’s also an opposing tendency to think that nothing fundamental ever really changes. This makes it harder to recognize significant developments when they do occur.

One of those fundamentals has always been the need to define requirements before building or selecting a system. But I’m beginning to suspect that has changed. In systems, we see methodologies like “extreme programming” that advocate incremental development with a minimum of advance documentation. In marketing, we see rapid test cycles taking advantage of low cost and quick feedback on the Internet. We also see outsourced systems making it easy to try new methods and media with little investment or implementation time. In databases we see tools like QlikTech, SkyTide, Alterian and SmartFocus that radically reduce the effort needed to load and manipulate large volumes of data. If you want to stretch a bit, you could even argue that non-relational data sources, such as XML, text and graphics files, are part of the movement towards a less structured world.

This all takes some getting used to. Loyal readers of this blog (hi Mom!) know how much I like QlikTech (reminder: my company is a reseller), precisely because it lets me do complicated things to large amounts of data with little technical skill, hardware or cost. The other products I’ve mentioned offer similar flexibility, although they aren’t quite as easy or inexpensive.

My point is, we’re used to a world where advance planning is necessary because building a significant-sized system takes a great deal of skill and effort. These new tools radically reduce that skill and effort, making possible hands-on experimentation. The value should be obvious: we can bring in more data and look at it in different ways, making discoveries that would be unavailable if doing each analysis were more expensive.

We may need to revise our standard model of a business intelligence architecture to accommodate this. Specifically, we would add ad hoc analysis systems to the existing data warehouse and data marts. The ad hoc systems would hold data we may want to use, but haven’t quite figured out how. They would be a stepping stone in the development process, letting us experiment with data and applications until we understand them well enough to harden their design and add them to the regular production systems.

(Incidentally, some data warehouse gurus have proposed an "exploration warehouse" or "exploration data mart" that is a somewhat similar concept. They seem to have in mind more carefully processed data than I do, but I wouldn't want to claim to be the first to think along these lines.)

This notion may meet some resistance from IT departments, which are naturally reluctant to support yet another tool. But the price/performance ratios of these new products are so overwhelmingly superior to conventional technologies – at least when it comes to ad hoc analysis – that IT should ultimately see the advantage. End-users, particularly the technically skilled analysts who suffer the most from the rigidity of existing systems, should be more enthusiastic from the start.

Wednesday, April 18, 2007

Overcoming the Real Roadblock to Customer-Centric Management

A marketer who understands the need for customer-based management told me yesterday that the biggest obstacle to adoption is the intense product orientation of her business. That sounded familiar: I heard exactly the same thing at the National Center for Database Marketing conference last December (see my December 13 post).

Her comments were a useful dose of reality. It's easy to forget that a lack of customer-centric technology and skills is not the main obstacle to customer-based marketing. And, even though I can give you a detailed explanation of why product-centric structures are doomed to collapse, that doesn't necessarily convince people to adopt a customer-centric approach as the alternative. So I do have to try harder to explain how companies will benefit from the transition.

One positive bit of news is that online marketing operations seem generally more open to customer-centric approaches than traditional channels are. Maybe this is because the technology itself is so often based on customer profiles. Maybe it's that initial online ventures often resided outside of the traditional product organization. Maybe it's that the "free" nature of incremental online messages removes the cost constraints of other media, and thus exposes the need for someone to control the number of messages sent to customers. Whatever the reason, online marketers are more likely to function as gatekeepers of customer access. This puts them in a position to coordinate offers so they meet the needs of customers, not product managers. Maybe the resulting successes will lead other channels to adopt the same approach.

Tuesday, April 17, 2007

Knotice Integrates Digital Channels: Too Much, Too Little, or Just Right?

My research into mobile marketing systems generated a conversation last week with Brian Deagan, CEO of Knotice. As I wrote on April 3, Knotice stands out because it promises to integrate mobile marketing with email, Web and interactive TV campaigns. Brian confirmed that this is their specialty.

According to Brian, Knotice bases its claim on three capabilities:

- users can create interactive campaigns in all four media from the same system
- campaigns in all media access a shared customer profile
- medium-specific message templates can draw on “medium-agnostic” components in a shared content repository
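The third point is the most concrete of the three, so here is a rough guess at what medium-specific templates drawing on medium-agnostic components might look like in practice. The structure and all the names are my own invention, not Knotice's design:

```python
# A guess at "medium-agnostic components plus medium-specific templates."
# Everything here is invented for illustration; it is not Knotice's actual model.

offer = {                      # a medium-agnostic component in the shared content repository
    "headline": "Spring sale: 20% off accessories",
    "short_code": "SAVE20",
    "landing_url": "http://example.com/spring",
}

templates = {                  # medium-specific templates that draw on the same component
    "sms":   "{headline}. Text {short_code} to 12345.",
    "email": "<h1>{headline}</h1><p><a href='{landing_url}'>Shop now</a></p>",
    "web":   "<div class='banner'>{headline} - <a href='{landing_url}'>details</a></div>",
}

def render(medium, component, profile):
    """Render one component for one medium; the shared profile drives personalization."""
    greeting = f"Hi {profile['first_name']}, " if medium == "email" else ""
    return greeting + templates[medium].format(**component)

profile = {"customer_id": "c42", "first_name": "Dana"}   # the shared customer profile
for medium in ("sms", "email", "web"):
    print(f"[{medium}] {render(medium, offer, profile)}")
```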

Taken together, these capabilities certainly give Knotice greater breadth than products aimed at just one channel. On the other hand, there is a whole non-digital world out there including direct mail, telephones, print media, retail, and so on. There are broader-scope enterprise marketing systems and CRM systems that encompass those, although their mobile capabilities may be limited. And Knotice itself seems to lack some of the functions of mobile specialists like Enpocket, particularly for ad serving.

In other words, this is yet another example of that familiar tension between integrated suites and best-of-breed specialists. Knotice falls somewhere in between: from the enterprise perspective, it’s a digital media specialist; from the mobile perspective, it’s a suite. Maybe that’s a sweet spot (pun adamantly denied), and maybe it’s a dead zone (clever wireless reference definitely intended). Time will tell.

Monday, April 16, 2007

More Details on Enpocket

I had a brief but enlightening chat last week with Mike Baker, CEO of Enpocket. Mike clarified that Enpocket is a marketing services agency, which means its software is used only for its own service clients. He also said that Enpocket does indeed offer interactive messaging capabilities like voting, polling, surveys, coupons, store finders and such, even though its Web site may not highlight the fact. Mike called such services "table stakes" for companies who want to compete in this field, which is why they're barely worth mentioning. (I've corrected my April 3 post on mobile marketing systems to reflect this.)

But our conversation did confirm that the primary focus at Enpocket is on advertising, rather than messaging. According to Mike, three things make them unique:

- multimodal transmission, which includes ads via SMS (text messages), MMS (multi-media messages), and insertion into other applications such as games, downloads, and video.

- a database marketing focus, using customer profiling to target ads. These profiles include previous transactions known to the cellular carriers, as well as demographics and other behaviors.

- propensity scores that identify the most appropriate offers for each customer. These are updated daily and made available to the ad server, which uses them to select content.
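Enpocket hasn't shared its internals, so treat this as nothing more than my own sketch of the general pattern Mike describes: scores computed offline on a daily cycle, with the ad server doing only a cheap lookup at request time. All names and numbers are invented:

```python
# Sketch of the daily-score / ad-server split described above. My own simplification,
# not Enpocket's design; customers, offers and scores are invented.

# Output of a hypothetical nightly scoring run: propensity per customer per offer.
propensity_scores = {
    "customer_123": {"ringtone_promo": 0.62, "data_plan_upsell": 0.18, "game_download": 0.41},
    "customer_456": {"ringtone_promo": 0.07, "data_plan_upsell": 0.55, "game_download": 0.12},
}

def select_offer(customer_id, eligible_offers):
    """Ad-server side: return the eligible offer with the highest propensity score."""
    scores = propensity_scores.get(customer_id, {})
    return max(eligible_offers, key=lambda offer: scores.get(offer, 0.0))

# At ad-request time only the precomputed scores are consulted; no model runs in-line.
print(select_offer("customer_123", ["ringtone_promo", "game_download"]))     # ringtone_promo
print(select_offer("customer_456", ["ringtone_promo", "data_plan_upsell"]))  # data_plan_upsell
```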

Thursday, April 12, 2007

Beware Distributed Analytics (You Read It Here First)

What might be called the “standard model” of business intelligence systems boils down to this: many operational sources feed data into a central repository which in turn supports analytical, reporting and operational systems. A refinement of the model divides the central repository into a data warehouse structured for analysis and an operational data store used for immediate access. (Come to think of it, I don’t see the term “operational data store” used much anymore—have I just stopped looking or did I miss the memo that renamed it?)

No one has been attacking this model directly (unless I’ve missed yet another memo), but it does seem to be under some strain. Specifically, much more analytical power is being built into the operational systems themselves, reducing the apparent need for centralized, external business intelligence functions. Examples abound:

- channel-specific analytical systems for Web sites and call centers (see last Thursday's post).
- more extensive business intelligence capabilities built into enterprise software systems like SAP
- rule- and model-driven interaction management components built into customer touchpoint systems
- more advanced testing functions built into operational processes (okay, we haven’t seen that one yet, but I did write about it yesterday).

Generally speaking, more analytical capability in operational systems is a Good Thing. But if these capabilities replace functions that would be otherwise provided by a centralized business intelligence system, they reduce the incremental value provided by that central system and therefore make it harder to justify. The resulting fragmentation of analytics is preferred by many departments anyway because it increases their autonomy.

This fragmentation is a Bad Thing, especially from the perspective of customer experience management. Fragmentation leads to inconsistent customer treatments and to decisions that are optimal for the specific department but not the enterprise as a whole.

The problem is that analysts, consultants, journalists and similar entertainers are always looking for new trends to champion. (Yes that includes me.) So coining a phrase like “distributed analytics” and calling it the Next Big Thing is immensely attractive. (FYI, a Google search on “distributed analytics” finds 64 hits, compared with 410,000 for “predictive analytics”. So the term is definitely available.)

We must all resist that temptation. A consolidated, centralized data view may seem old-fashioned, but it is still necessary. (Whether the data must be physically consolidated is quite another story—there’s nothing wrong with federated approaches.) By all means, analytical capabilities should be extended throughout the organization. But everyone should be working with the same, comprehensive information so their independent decisions still reflect the corporate perspective.

Wednesday, April 11, 2007

Operational Systems Should Be Designed with Testing in Mind

A direct marketing client pointed out to me recently that it can be very difficult to set up tests in operational systems.

There is no small irony in this. Direct marketers have always prided themselves on their ability to test what they do, in pointed contrast to the barely measurable results of conventional media. But his point was well taken. Although it’s fairly easy to set up tests for direct marketing promotions, testing customer treatments delivered through operational systems such as order processing is much more difficult. Those systems are designed with the assumption that all customers are treated the same. Forcing them to treat selected customers differently can require significant contortions.

This naturally led me to wonder what an operational system would look like if it had been designed from the start with testing in mind. A bit more reflection led me to the multivariate testing systems I have been looking at recently—Optimost, Offermatica, Memetrics, SiteSpect, Vertster. These take control of all customer treatments (within a limited domain), and therefore make delivering a test message no harder or easier than delivering a default message. If we treated them as a template for generic customer treatment systems, which functions would we copy?

I see several:

- segmentation capabilities, which can select customers for particular tests (or for customized treatment in general). You might generalize this further to include business rules and statistical models that determine exactly which treatments are applied to which customers: when you think about it, segmentation is just a special type of business rule.

- customer profiles, which make available all the information needed for segmentation/rules and hold the tags that identify customers already assigned to a particular test

- content management features, which make existing content available to apply in tests. Some systems provide content creation as well, although I think this is a separate specialty that should usually remain external.

- test design functions, which help users create correctly structured tests and then link them to the segmentation rules, data profiles and content needed to execute them. These functions also cover parameters such as date ranges, preview features for checking that tests are set up correctly, workflow for approvals, and similar administrative features.

- reporting and analysis, so users can easily read test results, understand their implications, and use the knowledge effectively.
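Pulling those pieces together, here is a bare-bones sketch of how such a treatment engine might hang together. None of the testing vendors I mentioned necessarily works this way; the rules, profile fields and content keys are all invented:

```python
# Bare-bones sketch of a treatment engine built from the functions listed above.
# Purely illustrative: rules, profile fields and content keys are all invented.

import random

content = {                       # content management: assets available to tests
    "default_banner": "Standard offer",
    "banner_a": "Free shipping offer",
    "banner_b": "10% discount offer",
}

tests = [{                        # test design: a segment rule plus the variants to compare
    "name": "shipping_vs_discount",
    "rule": lambda profile: profile["lifetime_orders"] >= 2,   # segmentation as a business rule
    "variants": ["banner_a", "banner_b"],
}]

def choose_treatment(profile):
    """Assign the customer to a test cell (tagging the profile) or serve the default."""
    for test in tests:
        tag = f"test:{test['name']}"
        if tag in profile["tags"]:                  # already tagged: stay in the same cell
            return content[profile["tags"][tag]]
        if test["rule"](profile):                   # rule fires: assign a variant and tag it
            variant = random.choice(test["variants"])
            profile["tags"][tag] = variant
            return content[variant]
    return content["default_banner"]                # no test applies: default treatment

profile = {"customer_id": "c1", "lifetime_orders": 3, "tags": {}}   # the customer profile
print(choose_treatment(profile))   # assigns and records a test cell
print(choose_treatment(profile))   # same cell on every later contact
```

Reporting, the last item on the list, would then read the recorded test tags back out of the profiles alongside whatever outcome data the system captures.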

I don’t mean to suggest that existing multivariate testing systems should replace operational systems. The testing systems are mostly limited to Web sites and control only a few portions of a few pages within those sites. They sit on top of general purpose Web platforms which handle many other site capabilities. Using the testing systems to control all pages on a site without an underlying platform would be difficult if not impossible, and in any case isn’t what the testing systems are designed for.

Rather, my point is that developers of operational systems should use the testing products as models for how to finely control each customer experience, whether for testing or simply to tailor treatments to individual customer needs. Starting their design from the perspective of creating a powerful testing environment should enable them to understand more clearly what is needed to build a comprehensive solution, rather than trying to bolt on particular capabilities without grasping the underlying connections.

Monday, April 09, 2007

Mobile Marketing Systems Can't Stand Alone For Long

My recent exploration of mobile marketing systems left me wondering why the enterprise marketing vendors have not jumped to service this market. By "enterprise marketing" vendors, I mean Unica, SAS, Teradata, Alterian, SmartFocus and the like. Their products can already create messages in SMS and (presumably) other mobile formats. But so far as I know, they lack the advanced mobile capabilities for ad serving, voting and polling, downloads such as ring tones, and user-initiated content uploads. It probably wouldn't be too hard to add such features through alliances or purchase of a mobile marketing specialist. No doubt this will happen: they all added email in a similar fashion when it became important.

But these vendors’ priorities have clearly been elsewhere. They have been competing based on advanced analytics including segmentation, scoring and optimization; marketing administration including budgeting, project management and content libraries; marketing performance measurement; and specialized operational features such as distributed access and lead management. What these features share is that they support activities across many channels. Providing the most advanced features in a single channel, such as mobile, has not been the vendors’ goal.

You’ll immediately recognize this as the classic strategy of integrated suite vendors seeking to overwhelm best-of-breed specialists. Over time, the integrated products tend to win, although they are always threatened by still bigger fish: in the case of the customer management vendors, these would be enterprise software vendors like Oracle, SAP, and perhaps Infor.

It’s hard to see how the mobile marketing vendors can survive once mobile marketing attracts the attention of the enterprise marketing management systems. The mobile vendors already are trespassing on enterprise marketing turf by setting up their own functions for permissions tracking, campaign definition, customer profiles, real time interactions, and content management—capabilities needed to execute mobile marketing projects, but which really should belong in an enterprise system that can share them across multiple channels. Right now, mobile marketing is new enough that the specialist vendors’ unique expertise can justify working with them independently. But this is a very temporary situation. As mobile marketing becomes more widely understood, it will be assimilated into other marketing activities and the specialized systems will vanish.

Skytide Simplifies Customer Experience Analysis

Last Thursday's post made a passing reference to Skytide, a vendor which may not be familiar to many readers. Here's a bit more information, based on conversations with the company.

Essentially, Skytide extracts and aggregates user-selected elements from large data streams, such as Web logs or call center history files. The aggregated data is stored in a multi-dimensional structure where it can be accessed by the Skytide analyst’s workbench or by third party tools through JDBC, MDX and the system’s own API.

So far this sounds like other multi-dimensional database products. What sets Skytide apart is its use of XML and the XPath language to access the source data. This allows it to read both conventional structured databases and unstructured data. (All sources, including the structured ones, are defined to the system as XML.) It also simplifies creation of the multi-dimensional models themselves by automating chores that are otherwise done manually, such as defining the members of each dimension.

Another of XPath's virtues is working with relations among "sibling" records (the equivalent of rows in a conventional database), which is difficult in a set-based language like SQL. This allows Skytide to report on paths through a Web session or phone system. XPath also lets Skytide set up overlapping access hierarchies to the stored data, which is difficult in conventional multi-dimensional systems. On a practical level, alternative hierarchies greatly speed system deployment because users need not agree on a single data structure.
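To see why the sibling point matters, consider a toy session log. With an XPath sibling axis, the page-to-page transitions fall out of a one-line query, where SQL would need self-joins or window functions. This uses plain lxml and generic XPath; it is my own toy example, not Skytide's actual interface:

```python
# Toy illustration of the sibling-axis point: page-to-page paths from an XML session log.
# Plain XPath via lxml; not Skytide's actual syntax or API.

from lxml import etree

session_log = etree.fromstring(
    "<session id='abc'>"
    "  <page name='home'/>"
    "  <page name='search'/>"
    "  <page name='product'/>"
    "  <page name='checkout'/>"
    "</session>"
)

# Pair each page with its immediate successor: a sibling-axis expression in XPath,
# versus a self-join or window function in SQL.
transitions = [
    (page.get("name"), page.xpath("following-sibling::page[1]/@name")[0])
    for page in session_log.xpath("//page[following-sibling::page]")
]
print(transitions)   # [('home', 'search'), ('search', 'product'), ('product', 'checkout')]
```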

With any system that works by extracting data from source systems, a key question is the time required to read and store the reformatted data. Skytide says they do this nearly as fast as the data can be read—adding just ten to twenty percent on top of the I/O time. The input need not be presorted, since the XML tags and XPath avoid the need for any physical storage hierarchy.

Skytide was launched in 2003 and has since matured to handle huge data volumes—the largest installations scan ten terabytes of source data per day. The company has been growing slowly and quietly with limited financing, although its dozen clients include impressive names like IBM, Sun and Akamai. It is now seeking additional funding to allow faster expansion.

Skytide is ultimately a database engine, not an analytics application. But it makes assembling cross channel data easy and supports analyses that are otherwise difficult. So it could provide a valuable platform for companies seeking to improve their customer experience analysis capabilities.

Friday, April 06, 2007

Advertisers Must Help Marketers To Build Relationships

Today’s New York Times reports that cable TV networks are balking at an eBay-built auction site to sell their advertising (“For Cable TV, No Interest in Selling Ads The eBay Way”, page C3, The New York Times, April 6, 2007). The networks’ justification is that many ads are now sold as part of larger packages, rather than simply on price. The article quotes Cabletelevision Advertising Bureau President Sean Cunningham as saying, “The grand majority is about idea-driven packages that have got multiple consumer touch points.”

Mr. Cunningham has access to more information than I do, but I somehow doubt that the "grand majority" of cable TV ads are part of multi-touch point packages. The very fact that, according to the article, the online exchange was sponsored by large advertisers including Hewlett-Packard, Home Depot and Intel, who committed up to $50 million, suggests that those advertisers felt many of their purchases could be made outside such packages. Even an amateur cynic would suspect the networks' real concern is that an auction would result in lower prices.

The article also mentions efforts by Google and DoubleClick to create online exchanges for advertising purchases, and suggests these may also face resistance from media companies. (Although the article doesn’t mention it, other reports have stated that Google is indeed having trouble gaining cooperation from radio stations.)

The obvious story line here is “new technology tries to reduce costs and the old guard resists”. But maybe things aren’t so simple. After all, integrated, multi-touch point marketing is a Good Thing in my personal universe, and I think in the customer experience management world in general. If an auction model does prevent more sophisticated marketing programs from happening, perhaps it really is a bad idea. Even though the core concept of the Customer Experience Matrix (delivering the highest value message possible for each communication opportunity) sounds very auction-like, it requires more information about those opportunities than would be available in a typical auction situation. If I may pontificate a bit (and who’s to stop me?), markets in general are only as good as the information known to their participants—so if closer buyer/seller relationships make information more accessible, they may actually result in a more efficient use of resources than a freer but less informed auction.

This doesn't mean auctions are useless. Certainly eBay, Priceline and various price comparison sites work well for consumers making one-time purchases (not necessarily of commodities) primarily on price. Even in the advertising world, price-driven purchases can work for campaigns that don't require on-going relationships with customers (say, viewers of a particular TV show) or media (say, cross-promotion on TV, radio, Web and print venues with a common owner).

There's a general theory in here somewhere, that the value of a communication opportunity can be increased by having more information available but only if the information is usable. In an advertising situation, it's the medium owner (TV station, Web site, newspaper, list owner, etc.) who must provide the information and the advertiser who must be able to use it. For internal situations, the medium owner and advertiser are the same company but the separation between exposing and exploiting information is still relevant. The general theory would also need to account for the fact that the same communication opportunity can have more value within the context of a relationship, or even within a particular contact stream, than outside of that context. This means creating a relationship actually creates a more valuable communication opportunity—a sort of "sweat equity" if you will—although this value only exists for the relationship owner.

Media owners will naturally want to share in the value created by their advertisers’ relationships. Their negotiating position is weak since no alternative buyer can benefit from the relationship (and thus, alternative buyers will not pay a premium for the same contact opportunity.) What media owners can do is to negotiate up front when they are giving advertisers the opportunity to start building the relationship. Really clever media owners would go a step further and make it easier for advertisers to build relationships, both by making more information available and by helping advertisers learn how to make use of it.

None of this really justifies the cable TV networks' refusal to participate in advertising auctions. Advertisers who have relationship-based programs would not use the auction-based ad exchange even if it existed. Advertisers who can be more flexible in their purchases are the ones who stand to gain. If the cable networks reject the auction approach because they think they can make more money selling ads the old way, that's their privilege. But I would respectfully suggest that they put their efforts into adding value to their media instead of defending the status quo. Advertisers have too many other contact opportunities to pay a premium where none is justified.

Thursday, April 05, 2007

Channel-Specific Analytics Are Doomed: Doomed, I Tell You

Did you ever have one of those crazy dreams, not quite a nightmare, where unrelated things get mixed up together? I felt that way this morning when I was looking at the Web site for one of the mobile marketing systems and saw they had alliances with Web analytics vendors. That rang a bell, but it took a while for me to realize that I had been writing about consolidation in the Web marketing space separately from mobile marketing.

The confusion is compounded by my recent look at non-Web analytics systems including ClickFox (which gathers interaction logs from call centers and other systems) and Skytide (which gathers all kinds of data; I haven't written about it yet).

There’s an obvious connection between systems that gather interaction data and those that manage marketing messages. As the Omniture / TouchClarity hookup I mentioned yesterday illustrates, some of the vendors are themselves bringing the two together. It’s no surprise that this would happen for Web systems, which tend to be internally integrated but isolated from other media.

Of course, the Web should not be isolated, and the trend is in fact towards cross-channel integration. Does it make sense, then, for Web analytics vendors to integrate tightly with Web targeting systems? You can see why an analytics vendor would want to do it—as a revenue-generating line extension and a way to help clients who lack an existing targeting solution. But the vendors (and I’m sure Omniture recognizes this) must also make it easy to integrate their systems with any other targeting product. Otherwise, they risk losing sales to prospects who already have a targeting solution and don’t want to change it.

From a broader perspective, though, interaction data from many channels needs to be combined for marketers to do the best job of analysis and targeting. This can be done by physically copying the data into a traditional data warehouse or by using some sort of virtual or federated structure. What's important is that data from many sources must come together into a single location, where it becomes accessible to many execution systems. In other words—am I beating a dead horse here?—you don't want direct connections between single-channel source and execution systems, such as Web analytics to Web targeting.

This has technical implications. In the cross-channel scheme, the role of the analytics system is just to gather and reformat data so it can be presented to the central storage facility. The actual analysis would be done in the central system or by a cross-channel analysis system that draws from it. This means that products which combine data gathering and analysis, like current Web analytics systems, need to decouple those functions and build open interfaces to reconnect them. These interfaces would allow users to substitute other products on either side of the relationship. In addition, vendors with specialized data storage technologies might offer a storage component with interfaces at both ends, one to accept feeds from multiple data-gathering systems and the other to allow access by multiple analysis and targeting tools.
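In code terms, the decoupling I'm describing looks roughly like the sketch below. The class names are mine, purely schematic, and not tied to any vendor's product:

```python
# Schematic sketch of the decoupling described above: channel-specific collectors feed a
# shared customer data store, and analysis or targeting tools read from that store rather
# than from any one channel's collector. All names are illustrative only.

from abc import ABC, abstractmethod

class Collector(ABC):
    """Channel-specific data gathering (Web logs, call center logs, and so on)."""
    @abstractmethod
    def events(self):
        """Yield normalized interaction events as (customer_id, channel, action)."""

class WebLogCollector(Collector):
    def events(self):
        yield ("cust1", "web", "viewed_product")
        yield ("cust2", "web", "abandoned_cart")

class CallCenterCollector(Collector):
    def events(self):
        yield ("cust1", "call_center", "billing_complaint")

class CustomerDataStore:
    """The shared location every analysis and execution system draws from."""
    def __init__(self):
        self.history = {}
    def load(self, collector):
        for customer_id, channel, action in collector.events():
            self.history.setdefault(customer_id, []).append((channel, action))
    def profile(self, customer_id):
        return self.history.get(customer_id, [])

store = CustomerDataStore()
for source in (WebLogCollector(), CallCenterCollector()):
    store.load(source)

# A targeting tool sees the cross-channel picture, not just its own channel's feed.
print(store.profile("cust1"))  # [('web', 'viewed_product'), ('call_center', 'billing_complaint')]
```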

This is not an appealing proposition for many vendors. Breaking their systems into components opens them up to more competitors and risks each component appearing to be a commodity. It also lowers switching costs, placing further pressure on prices. In general, as I've noted many times, vendors seek to expand their footprint and increase integration, not the other way around.

But vendors who specialize in systems for one channel will increasingly find themselves frozen out of multi-channel opportunities. There are already many products that provide multi-channel data storage, analysis and targeting. Data gathering still tends to be channel-specific, but that won't last as new channels become better understood.

In short, vendors who seek to remain channel specialists are likely to find their business shrinking over time. This may seem like bad news, but the sooner they begin to adjust to it, the better off they’ll ultimately be.

Wednesday, April 04, 2007

Acxiom Digital Buys Kefta for Web Page Personalization

It doesn't take a crystal ball to foresee continuing consolidation among online marketing systems vendors. Today, email specialist Acxiom Digital announced it is purchasing Web targeting specialist Kefta. In February, the news was Web analytics vendor Omniture buying behavioral targeting vendor TouchClarity.

The Acxiom Digital / Kefta match up is somewhat more interesting because it combines email and Web channels. Omniture and TouchClarity were both primarily Web specialists. Of course, marketers are increasingly interested in synchronizing efforts in both online spheres, so we can expect more cross-channel acquisitions as vendors seek to meet this demand.

The Kefta acquisition is also interesting because Kefta targets with user-specified rules, rather than the self-adjusting statistical models used by TouchClarity, [X+1] and Certona. This suggests that what Acxiom Digital found really appealing was Kefta's ability to render personalized Web pages, rather than its particular targeting technology. Much as I personally am fascinated by automated targeting solutions—and much as I suspect they can bring higher ROI than rules-based approaches—this probably reflects an accurate assessment that most marketers prefer the clarity of a rules-based approach to the mysteries of a self-adjusting model. Certainly the rules-based approach is closer to the targeting methods used in Acxiom Digital's e-mail personalization system. So the two vendors are philosophically compatible, regardless of what it will take to integrate their technology.

Note that, if I'm correct that the real appeal of Kefta is in its page personalization technology, this supports my thesis about the value of Web testing vendors (see Are Multi-Variate Testing Systems Under-Priced?).

From a user’s viewpoint, the important question about the Kefta acquisition is how much easier it will make it for marketers to tightly integrate cross-channel customer experiences. Frankly, I wouldn’t expect much change any time soon. Judging by the results of other mergers, it will take quite a bit of effort—perhaps even a complete rewrite—to run both delivery systems from a common campaign manager. Moreover, the real challenge with integrating Web and email campaigns is identifying when a particular Web visitor is the same person as a known email account. This linking is done outside of the Web and email delivery systems, so merging two delivery systems into one won’t really make it any easier.

And let’s not forget that Web and email are not the only online channels. A truly integrated online marketing system must also include mobile. That consolidation has yet to begin.

Tuesday, April 03, 2007

Differences Among Mobile Marketing Systems

You may have thought from last Friday's post that I had merely gathered the names of mobile marketing software vendors. Au contraire. That list was based on a close parsing of the relevant Web sites. (And if it's on the Web, it might be true, right?) Now that I've had some time to sift through the materials I assembled, it's possible to make some more precise observations about what differentiates the systems.

1. Voting vs. Ad Serving: this seems to be the Great Divide. Of the seven products that seem to be serious marketing systems (as opposed to simple message blasters), four mention voting and related applications (sweepstakes, contests, etc.) and three mention mobile ad serving, but none mention both. I realize that ad serving is a pretty specialized skill. But I don't see why it should conflict with voting and similar interactions, so perhaps some vendors do both and just don't mention it. (Note: after posting this, I learned that at least one of the ad serving vendors, Enpocket, indeed does both. The others may as well.) The voting vendors are Flytxt, Netcom Consulting, Kodime and MessageBuzz, while the ad serving vendors are Velti, Knotice and Enpocket. It's also worth noting that Ad Infuse, which is not on my list of seven, is a mobile ad serving specialist.

2. Upload user content. User-provided content might be considered shorthand for all the Web 2.0 community features. You need not just to upload it but also to provide a way for others to search for and view it. Only Kodime and Enpocket make a point of mentioning this—which, again, doesn't mean others don't do it.

3. Download paid content. This could be anything from simple ringtones to coupons to full e-commerce, so there is probably a broad range of functionality among the five vendors who mention it (Netcom Consulting, Kodime, MessageBuzz, Velti and Enpocket). If you’re interested in this, also look at Mobiqua, a mobile coupon and ticketing specialist.

4. Interactive dialogues. This could mean automated interactions (Kodime), connecting to live humans (Flytxt, Velti), or enabling user-to-user conversations (Enpocket). These are wildly different applications, so this category also takes more digging depending on your exact needs.

So much for differences. Everybody is hosted, although Velti seems to offer on-premise as an option. Everybody can broadcast messages, usually in multiple formats (SMS, MMS, video, games, Web pages, portals, etc.) and receive responses. All maintain some form of customer database to capture permissions and build profiles. Most explicitly mention campaign management and the rest probably have some kind of campaigns too. But Velti, Knotice, and Enpocket describe more sophisticated campaign administration, with workflow and targeting. Those are the same three that do ad serving, which makes sense if you think about it.

If you’re looking for truly unique claims, Enpocket is still the only one I’ve seen that describes an automated scoring system to predict customer behavior, and Knotice is the only one positioning itself as multichannel in the sense of including Web and email as well as mobile.

Let me stress yet again that these are just impressions based on vendor Web sites. If you have a specific requirement, it’s definitely best to query all the vendors directly to see whether (and, more importantly, how) they support it.

Monday, April 02, 2007

Deltalytics' Lloyd Merriam Comments on LTV

My friend Lloyd Merriam has left a thoughtful comment on last week's post about Lifetime Value. It's worth treating as a post of its own. Here's Lloyd:

I completely agree that customer lifetime value (LTV) is the single metric against which all strategic business decisions should be evaluated. Although non-trivial, determining the current value of a customer isn't particularly challenging. Calculating future LTV – which, as you know, is what really matters – is neither simple nor straightforward. LTV is driven by lifetime duration (LTD) and future purchases. How much a customer is likely to spend (on average per purchase), how often they'll do so, and for how long will together determine their LTV. To the extent that these may have been poorly estimated, the accuracy of any subsequent analyses will be compromised.

That huge challenge aside, what's even more difficult is to qualify and quantify the relationship between discrete business decisions (primarily strategic but sometimes tactical) and their back-end results. In other words, assessing which business drivers had what direct and specific impact on LTV. For example, even if we can reasonably estimate that LTV has increased, say, 15% overall, how do we tie this increase back to a particular driver when, in fact, many may be at work? Was it our redesigned website, product line expansion, more restrictive (or liberal) returns policies, or new factory that is primarily responsible? Whether a particular strategic driver had a positive or negative effect is difficult enough to discern. Assigning its quantitative score is typically next to impossible.

Therefore, while it's perfectly valid to assert that it's the impact of a given business driver on customer lifetime value that's most important, it's just as important to recognize that leveraging this principle is exceedingly difficult due to the sheer complexity of the numerous interactions taking place – especially internal, but external as well (e.g. the actions of competitors).

Our approach (which we call “Deltalytics”) is to periodically estimate the average customer lifetime duration, the average customer spend, and subsequently track their change over time to expose trends that will ultimately govern future business performance. If we know that both are increasing, for example, it’s safe to say that the business is trending upwards. The rate at which this is occurring can, of course, be used to make specific predictions about future growth (or decline, as the case may be).

But these two metrics are just the tip of the iceberg. Others that can and should be used to gauge business performance include:

(1) Rate of new customer acquisition (and, conversely, attrition)

(2) Customer distribution by recency (the greater the proportion of recent buyers, the better the business will perform)

(3) Average latency (the sooner customers place subsequent orders the better)

(4) Customer distribution by frequency (the higher the better, although not nearly as predictive as recency)

(5) Multi-buyer conversion rate (the percentage of 1X buyers who become multi-buyers)

(6) Customer re-order rate by recency (the ratio of repeat buyers as a function of their recency segment, e.g. <30 days, 30-60 days, etc.)

(7) Customer reactivation rate (customers flagged as having lapsed but eventually reordered)

(8) RF Delta (the change in population density over time at the intersection of recency and frequency)

Although quite useful in themselves, the greater utility in each of the above metrics lies in evaluating and forecasting their deltas over time. Change, and the rate thereof, is far more meaningful and insightful in this context than the more common “static” approaches to predictive analytics.

Getting back to business drivers (and measuring their impact on the bottom line, viz. LTV) one must concede that no single solution or approach can effectively gauge them all. At some point, a seat-of-the-pants determination must be made based upon relevant, albeit inherently incomplete, data. Tests can be conducted to measure the impact of, say, introducing a new product line or instituting wide scale changes in pricing. But even then, other contributing factors that cannot be controlled for, and are likely to cloud the results, must be acknowledged (such as a new website, outsourcing the call center, and so on).

In a perfect world, strategic changes would be implemented in a linear and mostly piecemeal fashion to ensure that consistent and reliable analyses can be made. Because this is so rarely possible, however, some compromises must be made in terms of measuring and forecasting the impact of such changes.

It is our position that an optimal way to approach the problem is to analyze trends amongst the aforementioned business performance measures – more specifically, their change (and rate thereof) over time – and subsequently to tie these back, as best we can, to their underlying business drivers. This, unfortunately, is much easier said than done.

Lloyd Merriam
lmerriam@deltalytics.com
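For readers who prefer code to prose, here is a rough sketch of how two of the measures Lloyd lists (recency distribution and multi-buyer conversion) might be computed and compared across two snapshots. The sketch and its toy data are mine, not Deltalytics':

```python
# Rough sketch of two of the measures above (recency distribution and multi-buyer
# conversion rate) and their change between two snapshots. Toy data; not Deltalytics' code.

from datetime import date

# (customer_id, order_date) pairs from a hypothetical transaction file.
orders = [
    ("c1", date(2007, 1, 5)), ("c1", date(2007, 3, 20)),
    ("c2", date(2006, 11, 2)),
    ("c3", date(2007, 4, 1)), ("c3", date(2007, 4, 15)), ("c3", date(2007, 4, 28)),
    ("c4", date(2006, 8, 9)),
]

def snapshot(orders, as_of):
    """Recency (days since last order) per customer and multi-buyer rate as of a date."""
    last_order, counts = {}, {}
    for cust, d in orders:
        if d <= as_of:
            last_order[cust] = max(d, last_order.get(cust, d))
            counts[cust] = counts.get(cust, 0) + 1
    recency = {c: (as_of - d).days for c, d in last_order.items()}
    multi_rate = sum(1 for n in counts.values() if n > 1) / len(counts)
    return recency, multi_rate

prev_recency, prev_multi = snapshot(orders, date(2007, 3, 31))
curr_recency, curr_multi = snapshot(orders, date(2007, 4, 30))

# The "delta" view: it is the movement between snapshots that signals the trend.
print("Multi-buyer rate:", round(prev_multi, 2), "->", round(curr_multi, 2))
print("Share of customers active within 60 days:",
      round(sum(1 for r in prev_recency.values() if r <= 60) / len(prev_recency), 2), "->",
      round(sum(1 for r in curr_recency.values() if r <= 60) / len(curr_recency), 2))
```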