
May 14, 2022

Contextual Targeting Using Fox News’ ATLAS

As the upfront heats up, Fox News Media has prepared a technological solution to match its expanding demographic appeal: the introduction of ATLAS, which analyzes both the spoken word and the on-screen video content to facilitate contextual targeting for ads.

Fox News, in general, is on a roll. Not only has it been a news leader in the key demos for years, it’s now reaching a new level of success in certain other dayparts. According to Jeff Collins, Executive Vice President of Ad Sales, Fox News now has the ratings and scale of broadcast shows like The View and GMA and, as such, is reaching a much broader audience. Performance highlights include the comedy show Gutfeld, its new late-night program, which has out-performed other late-night shows, including Colbert, on occasion, and the news program The Five, which airs at 5 p.m., is #1 in cable news, and outpaces primetime in total viewers.

Now, with the introduction of ATLAS, Fox News Media is able to capitalize on those demos for advertisers. “ATLAS is different than a lot of current solutions that really just analyze the spoken word – transcripts of the video content – without the additional context of what's up on the screen,” Collins explained.

ATLAS uses AI and machine learning to better understand the context of the video content in real time. “It looks on a second by second basis. Within a news environment, topics change fairly rapidly. It could go from lifestyle to news or weather,” he explained. ATLAS keeps up with these fast-changing topics, giving advertisers the ability to align their brands more closely with preferred content at a more granular level. He noted that, especially in news, it is common to watch a talking head with a video running in the background. “Oftentimes the video might not be aligned with the spoken words the anchor is saying. We felt it was very important as a news organization to be able to arm our clients with proprietary tools that got a lot more granular than the current solutions.”

Privacy should not be a concern with ATLAS. Collins stated that, “The ability to target consumers is going to be really severely limited due to privacy regulations. This makes contextual targeting that much more important as there's all these limitations put on targeting actual consumers.” He added that, “We're not collecting any personally identifiable information. This is purely collecting data on the overall content and context, so I think that's an important thing to note given privacy regulations and restrictions on ad targeting these days.”

ATLAS works by “grading everything,” using keywords to better match an advertisement to both the spoken word and the visuals. Additionally, ATLAS can tag according to sentiment through machine learning. “There's a lot more selection of new types of content, and a tool like ATLAS allows our clients to be able to align their brands closer to our diversity of content and place the advertiser's messaging as close as possible to the types of content that they would want to be associated with,” he stated.
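Fox News has not published ATLAS's internals, so the second-by-second grading described above can only be illustrated with a minimal sketch; every class, field and function name below is hypothetical, invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch of per-second contextual tagging: each second of
# video carries topic keywords from both the transcript and the
# on-screen visuals, plus a sentiment grade. Names are illustrative,
# not ATLAS's actual data model.

@dataclass
class SecondTag:
    timestamp: int          # offset in seconds from the start of the segment
    spoken_keywords: list   # topics detected in the transcript
    visual_keywords: list   # topics detected in the background video
    sentiment: str          # e.g. "positive", "neutral", "negative"

def match_ad(tags, preferred_topics, avoid_sentiment="negative"):
    """Return True if a window of tagged seconds is brand-suitable:
    at least one preferred topic appears (spoken or visual) and no
    second carries the sentiment the advertiser wants to avoid."""
    topics = set()
    for t in tags:
        if t.sentiment == avoid_sentiment:
            return False
        topics.update(t.spoken_keywords)
        topics.update(t.visual_keywords)
    return bool(topics & set(preferred_topics))

segment = [
    SecondTag(0, ["weather"], ["storm footage"], "neutral"),
    SecondTag(1, ["weather", "travel"], ["airport"], "neutral"),
]
print(match_ad(segment, ["travel"]))  # a travel brand matches this window
```

The point of the sketch is the granularity: because tags exist per second, an advertiser can qualify or disqualify a placement on a much narrower window than a whole program or transcript.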

According to Collins, “There have been countless studies on the importance of aligning and being around the right context and how that increases upper funnel and also lower funnel metrics, brand metrics and performance. We're partnering with some of our clients now to be able to utilize ATLAS and run it alongside some of their current solutions, looking at the variances between what ATLAS is grading and what the current solutions are.” It is then possible to examine those variances and get a better understanding of why ATLAS is grading things a certain way. “It's providing more granular targeting capabilities,” he added.

One of the deliverables ATLAS can provide is attribution: “to be able to say, okay, is ATLAS actually driving better real-world outcomes for our clients? Is the contextual alignment actually helping to drive more site visitation or in-store traffic? So we'll do ATLAS targeting for a particular campaign and then, on the back end of that, do attribution studies which look at the amount of site visitation and in-store traffic that it drove,” he posited.

ATLAS’ main advantage is its ability to review and grade content quickly. “The current solutions in the market look at transcripts at about 2,000 characters per hour. ATLAS looks at closer to 60,000 characters per hour because we're picking up all of the background video as well,” he explained. This is especially important in news because of all of the variations in content that tend to run in a 10- or 15-minute block. “Being able to actually slice that out is something that I think is an important distinction for ATLAS versus what other tools currently offer,” he said. And these solutions are customizable. “We don't take a one-size-fits-all approach,” he stated, adding, “It has to come down to what our clients are comfortable with. We're not grading our own homework. We're allowing a third party to be able to judge the results and to judge it against the entirety of an advertiser's campaign, not just what's happening within our ecosystem.”

The result, according to Collins, “has been overwhelmingly positive. We are engaging with every major holding company and we're running tests now with most of them. Everybody is looking for better solutions in the space. They're happy to see us taking a leadership position in this area,” he concluded.

 

This article first appeared in www.MediaVillage.com

Artwork by Charlene Weisler

Jul 30, 2019

Why is Content Labeling Taking So Long? EIDR’s Will Kreth Explains.


In my decade as a data consultant, I’ve become a big proponent of content labeling to help facilitate the linking of content across platforms and devices. First spearheaded by Jane Clarke, CIMM’s CEO and Managing Director, in 2009, the labeling initiative for both ads and content promised to result in a type of universal, industry-standard UPC code. But it has taken much longer than I personally expected for the television marketplace. Not only are we still not there, but there seems to have been limited progress even as the number and diversity of platforms and devices proliferate and the global footprint expands.

So I sat down with Will Kreth, Executive Director, EIDR, to try to understand just what the challenges and obstacles are that are keeping us from what seems to be a no-brainer – a universally accepted labeling protocol so that every creator gets the full credit of all of the views for their content. I wanted to understand what’s going on and why it’s taking so damn long.
To some in the industry, according to Kreth, content labeling can involve metadata or behavioral tagging. But for him, content labeling takes on a much broader definition.

Charlene Weisler: What is your definition of content labeling?

Will Kreth: We think of it as unique identification of content to help the media and entertainment supply chain, to help workflows, and to help the life-cycle of a title. Content identification (through unique, machine-readable IDs) helps all of the different players and actors in the ecosystem – in the existing value chain, and also in the aspirational, yet-to-be-realized value chain.

Weisler: Who is doing it now and who is not doing it now?

Kreth: We have been strong in the film industry – with now 95% to 100% coverage at first theatrical window for all movies from the top six (now five) Hollywood studios. However, television is a major gap for us, in that we have not cracked the code on the motivations for the networks to look at open-standard, unique IDs as a way to improve audience measurement, generate incremental revenue, and/or lower costs significantly.

Weisler: Why would TV not see it while the film industry does see it?

Kreth: For years and years, television didn’t even operate with external identifiers. Content was shared – the satellite and cable operators and broadcast networks just used internal IDs – then published spreadsheets, Word documents or PDFs of program schedule information for print TV listings and Electronic Program Guides (EPGs). So, television in the traditional world of the last 30 to 40+ years was very linear. Then On Demand and DVRs came, and the VOD platforms developed by cable made an effort around creating VOD metadata, because they realized that they would be the ones sending the asset files out to the field to local cable head-ends. They had to describe them well so they could be ingested into broadcast automation or play-out systems.

Weisler: How is the television landscape in content IDs structured?

Kreth: The incumbents were the duopoly of Rovi (now TiVo) and TMS (the former Tribune Media Services), which rebranded itself as Gracenote; Gracenote is now part of Nielsen. The TMS ID has the lion’s share of usage in North American television broadcasting metadata. It’s the incumbent unique ID, and it took a lot of years of competing for market share to get there. Gracenote has become the dominant player in unique IDs in the United States, but not globally. So there is a vertically integrated play that the TMS ID is a part of, and it is now required by Nielsen. You now have a unique ID for the majority of U.S. paid TV viewing tied to the ratings system for the majority of U.S. homes. Through hard work and market dominance, the TMS ID (which went from Tribune to Gracenote and now to Nielsen) has achieved somewhat of a winner-take-all effect.

But there are some hold-outs. Some use TiVo IDs, some don’t use IDs because they have decided they are not at the end point of the distribution chain, and some use IDs from RedBee (formerly FYI Television) or others. Meanwhile, some sources, like the electronic program guides from all of the major set-top box manufacturers, had to be standardized. That process took years to reach a certain level of quality, and there are still a lot of gaps, mismatched metadata and errors. The digital video recorder also pushed a lot of folks toward standardization, because it insisted on the notion that if you are going to record a program, you need to know exactly when it starts and stops and what the program is. From a consumer’s point of view, it would be unacceptable to get the wrong program at the wrong time. There was no room for guesswork. And that pushed folks toward the effort for standardization around television metadata.
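The landscape Kreth describes, where one title carries different proprietary IDs depending on the vendor, is exactly what an open identifier is meant to bridge. A minimal sketch of such a cross-reference follows; the EIDR value mimics the real DOI-style shape (prefix 10.5240), but every specific ID below is an invented placeholder, not a real registry entry.

```python
# Hypothetical cross-reference table keyed by an open, EIDR-style ID.
# The proprietary IDs (TMS/Gracenote, TiVo) are invented placeholders
# used only to illustrate the mapping pattern.

ID_XREF = {
    "10.5240/AAAA-BBBB-CCCC-DDDD-EEEE-F": {   # invented EIDR-style ID
        "tms_id": "EP012345678901",            # placeholder TMS/Gracenote ID
        "tivo_id": "TV0000001",                # placeholder TiVo ID
    },
}

def resolve(open_id, system):
    """Look up a platform-specific ID for a title via its open ID.
    Returns None when the title or the target system is unknown."""
    record = ID_XREF.get(open_id)
    return record.get(system) if record else None

print(resolve("10.5240/AAAA-BBBB-CCCC-DDDD-EEEE-F", "tms_id"))
```

With one open key per title, each distributor can keep its internal IDs while measurement systems join viewing data across platforms through the shared identifier.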

Weisler: What about EIDR today?

Kreth: It’s a different world than just 10, even 5 years ago. There’s greater complexity and new business models beyond just rated linear and on-demand television (or legacy DVRs). The challenges of multi-platform distribution, streaming, and international OTT businesses are helping convince doubters that the world could do very well by adopting EIDR’s open standard to precisely identify the thing itself (content labeling) no matter where it plays or what device it plays on. Data collection and analytics are no longer a “nice to have” – they are mission critical. With EIDR’s open ID standard becoming ubiquitous in TV, costs would go down, innovation would go up, and competition would thrive in a world where title-level ID metadata is shared as a global standard – and not held in any one company’s proprietary ecosystem.

Weisler: So why is it taking so long?

Kreth: If you are a broadcaster, you would have been working for years and years to get these systems set up, to get the TMS IDs flowing, and to get them into broadcast automation systems. Switching to another title ID, or even supporting another title ID side by side, requires capital or operating expenditures that are not often in the budget of the major networks. Film, unlike television, did not have an existing solution; the studios had independent data services, and when they saw that there could be something like a universal ISBN number for film and TV, they said, ‘sign us up’ and incubated the data model structure at MovieLabs (the research and development arm of the US film industry). So, we have work to do in television. The good news is: we’re making great inroads with TV broadcasters in Sweden, the Nordics, the UK and the EU. Our motto has always been “Lots of IDs, Low Cost.” We want to make acquiring and labeling TV (and all video) content with unique EIDR IDs as easy, painless and inexpensive as possible.

With television, we realized we were up against existing workflows and lifecycles of how content was flowing, and with all of the vendors, hardware manufacturers, suppliers and operations systems, there was no one place to go. We’re starting to hear from vendors in the television industry that they’re ready to support EIDR in their software and toolsets. The world would change in an instant if one of the large US MVPDs said, ‘we require EIDR IDs.’ And we see signs that may be happening with at least one of the major US MVPDs, especially given the demonstrable need for unique IDs in measuring a multitude of data points and KPIs, on multiple platforms, with a myriad of program titles.

Weisler: Will, knowing what you know, what do you think the timeline is to get more than critical mass for television?

Kreth: There is a project we call EIDR 2020, where we’re pushing to see the major US TV distributors and vendors support EIDR alongside their existing workflows or their existing IDs. A sunrise period for EIDR ID ubiquity in 2020 starts to create that catalyst, the critical mass to move people off of stasis and inertia and toward embracing and extending their platforms and toolsets to support EIDR. Next year will be our tenth year, so there would be nothing greater than to see an industry-wide EIDR 2020 sunrise begin in television.


This article first appeared in TVREV.

Feb 26, 2019

Content Labeling and Data Transparency Initiatives Updates

When it comes to cross-platform measurement, if you can’t identify a piece of content, you can’t measure it; and if you can’t measure it, you can’t monetize it. Further, if you don’t know what is in the dataset you’re using, the results may be suspect. That is why two major data labeling initiatives (content labeling and data transparency), recently showcased at the CIMM Conference, are poised to take cross-media measurement to the next level.

Ad-ID and EIDR for Content
Harold Geller, Executive Director, Ad-ID, and Will Kreth, Executive Director, EIDR, have been instrumental in creating industry-standard labels that enable the seamless tracking of both ads and programming across all platforms and devices. Geller explained that over 400 advertisers are now using Ad-ID for their advertisements, and in order to make the cost of entry more affordable, Ad-ID prefixes are now free to registrants. Kreth noted that program labeling has had a positive effect that reduces friction. EIDR has seen rapid growth recently, reaching 2 million content records as of the end of 2018.


Data Transparency Label for Data
According to David Kohl, President and CEO, TrustX, audience and identity data are the foundation for billions of dollars in marketing and media spending. “But not all data is created equal,” he warned. “It is important to create a label that tells us exactly what is inside the data.”

To that end, a data transparency label has been developed that looks like the ingredients label found on food packages. This label enables all users – both advertisers and programmers – to know what type of data is inside. It will help answer questions such as: How did the data get created? Where did it come from? Who is the owner? What audience segments are used, and how was each segment constructed? Was it modeled, and where did the underlying data come from? Is it household, device or individual level – an ID, a cookie, a set-top box, a ZIP code or an address?
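The questions the label answers suggest a simple structured record. The sketch below is a hypothetical rendering of such an "ingredients label"; the field names and the sample values are my own illustration, not the published Data Transparency Label specification.

```python
from dataclasses import dataclass, asdict

# Hypothetical "ingredients label" for an audience data segment,
# mirroring the questions listed above. Field names are illustrative,
# not the official label spec.

@dataclass
class DataLabel:
    source: str             # where the data came from
    owner: str              # who owns the data
    collection_method: str  # how the data was created
    id_level: str           # "household", "device" or "individual"
    segment: str            # audience segment name
    modeled: bool           # modeled segment vs. deterministic

label = DataLabel(
    source="MVPD set-top boxes",            # invented example values
    owner="Example Data Co.",
    collection_method="set-top box tuning logs",
    id_level="household",
    segment="auto intenders",
    modeled=True,
)
print(asdict(label)["id_level"])  # "household"
```

A buyer inspecting such a record could tell at a glance that this segment is modeled, household-level data derived from tuning logs, which is exactly the kind of disclosure Kohl describes.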

Both content labeling and data transparency labels are designed to provide a level of trust for the industry. For Kohl, this is just the beginning. “We are on a journey,” he explained. “We are looking for industry feedback and plan to evolve the label over time.”

This article first appeared in Cynopsis.