
The Australian penchant for video streaming

This week in eBuzz from Michael Carney’s Marketing Week:

  • The Aussies’ obsession with online videos
  • iPredict top of the pollsters

The Aussies’ obsession with online videos

We’re not sure what it says about the Australian psyche—or perhaps about their free-to-air TV fare—but it seems that our Trans-Tasman cuzzies are enthusiastic consumers of online video content, with 41 percent of Internet users streaming or downloading videos and more than one in five (21 percent) doing so on a regular basis, according to findings released earlier this week from Nielsen’s 2010 Internet & Technology Report. By comparison, just a quarter of Internet users in the UK stream or download video.

Video downloading and streaming in Australia is expected to increase in 2010—44 percent of regular online video users say they’ll watch more video streams, while one third (33 percent) intend to download more online video content.

In a hint of things to come, the Internet & Technology Report showed that the most popular video downloads and streams were full-length episodes of TV shows, amateur video clips such as those found on YouTube, music videos and feature-length movies.


In other words, this Catch Up TV concept is catching on, especially as more and more of the hot TV shows migrate online.

Video advertising support remains more problematic. Despite the increasing adoption of video content by users, advertisers have been less supportive. According to Paul Fisher, CEO, IAB Australia, online video advertising revenue accounts for just four percent of total online display revenues.

Nielsen has been in discussion with key Australian industry bodies and clients about the viability of a Video Market Intelligence service, similar to the company’s existing Market Intelligence service. It’s a step in the right direction, but it’s nowhere near as ambitious as that planned by Nielsen in the US.

Across the globe, in the land of the free, Nielsen intends to start making available data that take into account viewing of commercials that run in a particular show, no matter whether they are seen online or on TV. The data will be made available for evaluation starting this September and are intended to become the basis for ad negotiations in February 2011.

Yes, combined viewing data. There is a catch, though: for Nielsen to be able to provide the commercial rating, shows seen online will have to carry the same group of commercials that runs on TV. If this system were adopted en masse—and it’s not clear that it would be—online viewing might be crammed just as full of commercials as the more traditional TV-watching experience.

“That in itself is a challenge,” Rino Scanzoni, chief investment officer at WPP’s Group M, told Advertising Age. “The consumer has been used to getting [online video] with either limited commercial interruption or no commercial interruption.”

Indeed, viewing programs on Hulu, the online video site owned by NBC Universal, News Corp. and Walt Disney, means encountering significantly fewer ads than one would see watching TV. And Disney’s ABC.com has met with some success by running ABC shows with just a few ads, often from a single advertiser.

But many TV executives say these methods don’t bring much, if any, profit—and therefore cannot continue.

So is the full-ad-content-online strategy viable?

Consumers already say they hate “pre-rolls” (solus ads run before video content). Yet, according to TubeMogul, 84% of online video viewers who are served a 30-second pre-roll ad before a piece of video content watch the entire ad without clicking away.

So why does the drumbeat of “pre-roll is dead” roll on? Randy Kilgore, chief revenue officer for Tremor Media, came up with four main reasons:

  1. A lot of content stinks. Pundits use drop-off figures as evidence of consumer dislike. What they fail to consider is that very often people click on videos on a whim. They see a brief description and they commit lightly. If there’s an ad, they weigh the amount of time they need to invest against what they might get from the content and often bail. You can’t really blame them. But if you measured only professional content, you’d find far different results.
  2. Targeting is nascent. Pre-rolls are placed based on site-level or section-level content descriptions (which implies demographic targeting). That will improve as the technology evolves—which it is, rapidly—but for now, if I see an ad for a women’s product on a news site and I’m only mildly interested in the content, I’m out.
  3. Video search stinks. People aren’t finding what they want, they’re finding what happens to be in front of them. Again, lack of commitment to the content increases impatience with the advertising. Not a format issue, a technology issue.
  4. Herd mentality still rules. I guarantee you, people who bash pre-roll haven’t stopped to think about the issues—it’s just easy/lazy to go along with the crowd. Every time I hear a speaker bash pre-roll, I approach them after the session. Every single time, they tell me I’m right: pre-roll is here to stay. Lesson learned: filter what you hear.

In other words, online video advertising will really work only if it lives up to the digital promise. That probably means that, in the long term, the Nielsen combined-data report won’t fly. If we get better online video advertising execution instead, we’re willing to make the sacrifice.

iPredict top of the pollsters

NZ Prediction Market operators iPredict are understandably upbeat over new research out of the University of Michigan which shows that iPredict significantly outperformed pollsters in the 2008 election. iPredict was the second-most accurate forecaster across the 2008 election campaign, beating all but one pollster and all ‘polls of polls’.

What are prediction markets? Cue Wikipedia:

[They] are speculative markets created for the purpose of making predictions. Assets are created whose final cash value is tied to a particular event (e.g. will the next US president be a Republican) or parameter (e.g., total sales next quarter). The current market prices can then be interpreted as predictions of the probability of the event or the expected value of the parameter. Prediction markets are thus structured as betting exchanges, without any risk for the bookmaker.

To put it another way: prediction markets involve betting on the outcome of current events, with or without real money being involved.
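
To make the mechanics concrete, here’s a minimal sketch (in Python) of how a binary contract works: it pays out $1 if the event happens and nothing if it doesn’t, so its going price can be read straight off as the market’s probability estimate. The $1 payout and the specific numbers are illustrative assumptions, not iPredict’s actual contract terms.

    # Minimal sketch of a binary prediction-market contract (illustrative only).
    # The contract pays $1 if the event happens and $0 if it doesn't, so the
    # going price doubles as the market's probability estimate.

    def implied_probability(price, payout=1.0):
        """Read the current contract price as the market's probability estimate."""
        return price / payout

    def expected_profit(price, my_probability, payout=1.0):
        """Expected profit from buying one contract, given your own estimate."""
        return my_probability * (payout - price) - (1 - my_probability) * price

    price = 0.62                         # current market price, in dollars
    print(implied_probability(price))    # 0.62 -> the market says 62% likely
    print(expected_profit(price, 0.75))  # ~0.13 -> worth buying if you think it's 75%

Anyone who believes the market has it wrong has a financial incentive to trade, and each trade nudges the price toward the crowd’s collective estimate.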

As James Surowiecki argued in his 2004 book The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, the combined wisdom of a diverse group of individuals leads to decisions that are often better than could have been made by any single member of the group.
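
Surowiecki’s claim is easy to demonstrate with a toy simulation. The numbers below (a thousand guessers, a generous dollop of noise) are invented purely for illustration, not taken from the book, but they show the basic effect: the crowd’s average lands far closer to the truth than the typical individual does.

    import random

    # Toy wisdom-of-crowds illustration: 1,000 independent, noisy guesses at 100.
    random.seed(1)
    true_value = 100.0
    guesses = [random.gauss(true_value, 20) for _ in range(1000)]

    crowd_estimate = sum(guesses) / len(guesses)
    typical_individual_error = sum(abs(g - true_value) for g in guesses) / len(guesses)

    print(abs(crowd_estimate - true_value))  # typically well under 1
    print(typical_individual_error)          # around 16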

Do we get smarter if we’re betting real money?

Well, at least we think more about our decisions when we have real skin in the game.

So why did iPredict (which does involve real money) do better than most pollsters? iPredict CEO Matt Burgess offers up some suggestions on his blog:

Prediction markets have consistently beaten polls for twenty years in the US, and in 2008 that trend continued in New Zealand. There are a number of reasons for this:

  1. Pollsters ask the wrong question. Pollsters do not generally ask about vote share on polling day. They ask: if the election were tomorrow, who would you vote for? That question introduces bias and interferes with an election-day forecast: elections have milestones, and different parties receive varying amounts of coverage during the course of a campaign. In the US, convention bounce is a well-known phenomenon in polling. By asking about an election tomorrow, pollsters shoe-horn their results into answering a different question.
  2. Pollsters’ results are sensitive to sampling bias. Pollsters have to be very careful about the composition of their samples, because their results depend strongly on it, and they have secret correction formulas that weight the responses they receive to correct for composition problems that naturally arise in sampling [a rough illustration of this kind of weighting follows this list]. McGirr and Salmond confirm that pollsters consistently lean one way or the other on the political spectrum, probably because of the subtle ways composition and corrections affect the final result. These biases, while reasonably consistent, add noise and make it harder to know what will happen on the day that counts.
  3. Pollsters are facing increasing difficulty in sampling. Most pollsters sample by phone, and a non-random subset of people do not answer their phones. Working out who those people are and correcting for their absence is not trivial. Furthermore, the number of people who answer their phone and are willing to take a phone survey is declining, further increasing sampling difficulties, since it is, as always, a non-random selection of people who decline to answer.
  4. Pollsters face lags. Polls aren’t usually conducted in a night, and may occur over a period of up to two weeks—an eternity in an election campaign. Add more time for the numbers to be crunched and the results published.
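
Point 2 is worth unpacking for marketers who live and die by survey data. Those “correction formulas” are essentially weighting schemes that re-balance a skewed sample against known population figures. Here’s a deliberately crude sketch with invented numbers; real pollsters weight across many variables at once, not a single age split.

    # Crude weighting sketch with invented numbers.
    # Suppose 18-34s are 30% of the population but only 15% of the phone sample,
    # and the two age groups support "Party A" at different rates.

    population_share = {"18-34": 0.30, "35+": 0.70}
    sample_share     = {"18-34": 0.15, "35+": 0.85}
    party_a_support  = {"18-34": 0.60, "35+": 0.40}

    # The unweighted estimate simply reflects the skewed sample...
    raw = sum(sample_share[g] * party_a_support[g] for g in sample_share)

    # ...while weighting each group by population share / sample share corrects it.
    weights = {g: population_share[g] / sample_share[g] for g in sample_share}
    weighted = sum(sample_share[g] * weights[g] * party_a_support[g] for g in sample_share)

    print(round(raw, 2))       # 0.43 - understates Party A because young voters are missing
    print(round(weighted, 2))  # 0.46 - what the full population mix would actually give

The catch Burgess points to is that every pollster chooses those corrections a little differently, which is where the consistent house leans come from.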

None of this is to say that what polling companies do isn’t valuable—absolutely it is. But if you want the most accurate and up-to-date information, these factors count against polls.

Implications for Marketers?

The New York Times covered some interesting business uses for prediction markets in an article on the topic in April 2008:

Companies like the InterContinental Hotels Group, General Electric and Hewlett-Packard are using prediction markets to try to improve forecasting, reduce risk and accelerate innovation by tapping into the collective wisdom of the work force.

Like blogs and wikis, prediction markets can spur communication and collaboration within a company. Yet they add rigorous measurement to business forecasts, like estimating the sales of a new product or the chances that a project will be finished on time.

Corporate prediction markets work like this: Employees, and potentially outsiders, make their wagers over the Internet using virtual currency, betting anonymously. They bet on what they think will actually happen, not what they hope will happen or what the boss wants. The payoff for the most accurate players is typically a modest prize, cash or an iPod.

The early results are encouraging. “The potential is that prediction markets may be the thing that enables a big company to act more like a small, nimble company again,” said Jeffrey Severts, a vice president who oversees prediction markets at Best Buy, the electronics retailer.

The store chain has experimented with prediction markets on everything from demand for digital set-top boxes to store-opening dates. For example, Mr. Severts said that in the fall of 2006, the prices in a prediction market on whether a new store in Shanghai would open on time—in December 2006—dropped sharply from $80 a share into the $40–$50 range. Players made yes-no bets, and the virtual dollar drop reflected increasing doubt that the store would open on time.

Indeed, Best Buy’s first store in China opened late, in January 2007, but the warning signs from the prediction market helped prevent further slippage.
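
The arithmetic behind that anecdote is worth spelling out. The article doesn’t state the contract’s payout, but the prices read as if a share paid $100 if the store opened on time and nothing otherwise; on that assumption, the share price converts directly into an implied probability.

    # Assumed payout structure: $100 if the Shanghai store opens on time, $0 otherwise.
    PAYOUT = 100.0

    def implied_chance(share_price):
        """Read a yes/no contract price as the probability the crowd assigns."""
        return share_price / PAYOUT

    print(implied_chance(80))  # 0.8  -> roughly an 80% chance of an on-time opening
    print(implied_chance(45))  # 0.45 -> after the slide into the $40-$50 range, close to a coin flip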

The jury is still out on whether such uses of prediction markets are mainly an innovative way to gather information from employees or a font of reliable answers. At any rate, they’re worth considering as another weapon in your knowledge-gathering arsenal.

  • And make sure you check out the Marketing Rebooted eCourses. As the name might suggest, it’s all about the challenges facing marketers as we head into the second decade of the new millennium. Under that name, we’ll be bringing you a series of eCourses for marketers trained in traditional advertising and marketing methods, who know a little about new-generation marketing tools and techniques—but not enough. The first installment will be on social media. And there’s a rundown of the course content and what it will cover here.
