
Research Issues

Caution: Research is a Loaded Gun

The following thoughts are adapted from remarks made by Steve Casey, President of Steve Casey Research, while participating on the panel "Research - Programming Tool or Loaded Gun" at the 1999 NAB Radio Show in Orlando.

I've been fortunate enough to be exposed to maybe 10,000 callout reports, audience tracking surveys, library music tests, Arbitron analyses and market studies over the last 30 years. I've been the client. I've been the consultant. I've been the researcher. Perhaps the most important thing I can do to be helpful is to share my experience with you. Over the years, we have found ways that, in each area of research, the tools can be very valuable for programming. Of course, we've also learned the hard way that you can shoot yourself in the foot with them. So I thought I'd share some of the highlights of what we think we've learned, so you can do either one.

Let's start with music research and callout. If you're programming more than about one quarter of your music as current music, then certainly it's an important programming tool. The key to making callout a valuable tool is to think "short-term feedback loop." Listeners react to what you did last week. You make changes in the music to keep them happy during the upcoming week. This suggests a fairly tight sample.

There is a strong temptation for research companies to go wide to keep your costs under control, since a tight sample usually involves a lower incidence rate and more hours on the phones. Low cost is a valid concern. But the solution must not violate the "short-term feedback loop" principle.

One approach that keeps costs under control, and that you can use to improve your sampling, is to do the calls yourself at a cost of $7-$8 per hour, as opposed to the $20-$30 per hour charged by an outside company. Caution: if you do, you may introduce another way to shoot yourself in the foot. To accomplish that, simply treat your research staff less professionally than you would your sales or jock staff.
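
For a rough sense of the savings, here is a back-of-the-envelope comparison using the hourly rates above. The 60 phone-hours-per-week workload is a hypothetical figure for illustration only; plug in your own.

    # Weekly callout cost: in-house callers vs. an outside company.
    # Hourly rates come from the text; the workload is an assumption.
    phone_hours_per_week = 60            # hypothetical weekly phone hours

    in_house_rate = (7 + 8) / 2          # $7-$8/hour, midpoint
    outside_rate = (20 + 30) / 2         # $20-$30/hour, midpoint

    in_house_cost = phone_hours_per_week * in_house_rate
    outside_cost = phone_hours_per_week * outside_rate

    print(f"In-house: ${in_house_cost:,.0f}/week")
    print(f"Outside:  ${outside_cost:,.0f}/week")
    print(f"Savings:  ${outside_cost - in_house_cost:,.0f}/week")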

The bottom line for stations programming currents is that callout research is the single most important research investment you can make, and it's worth trying to get it right. As an industry, this is not an area we can be proud of. Fewer programmers today know how to plan, manage and use the results from an in-the-station research effort. Because the essence of a program director's job is to monitor the audience's appreciation for the programming and to make appropriate modifications, it is hard to understand why so many today don't feel this is part of a radio station's core competency.

Library music testing is valuable because tastes in music shift, songs burn out, competitors introduce new music, and artists fall in and out of favor. Your recurrent and gold library is the foundation that exists so you don't have to play your currents every 37 minutes. It is also the 'glue' that links listeners together through the body of music they share an appreciation for.

There's a lot of controversy over library music testing lately, particularly in the area of gathering data. What methodology should you use? Each of the methodologies people have come up with has some strong points. But there are a few things that you need to get right. The first is a matter of environment. There are six keys that must hold for the respondents:

  1. The respondents must easily understand what to do and how to do it.
  2. The respondents need to be able to focus on the task without distraction.
  3. The respondents need to be able to express how they feel - their opinions - clearly. That includes burn, familiarity, and level of passion, and it probably precludes a really complicated questionnaire.
  4. The respondents must be able to enjoy themselves.
  5. The respondents must not become fatigued.
  6. The respondents must be able to embrace the survey and their participation in it as a serious, important thing.


If these things happen, the results will be OK. The big thing here is not to confuse the respondents.

The second issue is one of sample. This is a mirror that you'll be holding up for three, six, maybe twelve months. You want a very good sample of your most important customers and then, past that, sample as you can afford.

The third issue has to do with the analysis. We've found over time that you must pay attention to the fact that songs are not isolated islands; they work together to create a programming vision for the radio station. Here, the most effective way to shoot yourself in the foot is to bring the wrong people into the room. If we bring the right people in, it's going to be very difficult to confuse them so much that they say they like a song they don't really like. With the tiny sample size we're dealing with, we've got to have the right people.

Another critical issue, in terms of the analysis, is that you've got to make sure that you fit the auditorium music test (AMT) results into the programming vision. The programming vision is often the result of a very expensive market study, so you don't want your AMT undoing all of the work you just invested in.

The reality that people don't like everything on our station is reflected in their responses to the auditorium test or weekly music survey. Fortunately for us, the differences between people aren't random. People who feel a certain way about a Madonna song may be far more likely to have the same feeling about a Paula Abdul cut than about something by Genesis or Boston. This is an obvious example. But it illustrates a point: if you are a hot AC and you just played a Madonna cut, you want to follow it with something else in your library that is likely to be enjoyed by a different group of listeners than the people who just enjoyed the Madonna cut. In the example above, you would be better off with a Genesis cut than with one from Paula Abdul.

But the above is only an example. Your audience is unique, if only in terms of how much they have been exposed to different songs. That uniqueness is one of the reasons you bother to do music testing rather than rely solely on national averages. But if your listeners who like Madonna also like Paula Abdul, and if those who don't like either tend to like Genesis and Steve Winwood, then they will have told you that through their opinions on the music test. Positioning analysis, such as my own 'Variety Control', reveals the realities of song clusters in your music test. Armed with this information, your library research can now play a new, expanded role in helping you program the station.
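
To make the idea of song clusters concrete, here is a minimal sketch in Python. This is not the Variety Control methodology itself, just the underlying intuition: songs whose scores correlate across respondents cluster together. The scores below are invented for illustration.

    from itertools import combinations
    from statistics import correlation  # Python 3.10+

    # Hypothetical 1-5 test scores from five respondents; one row per song.
    scores = {
        "Madonna":     [5, 4, 5, 2, 1],
        "Paula Abdul": [5, 5, 4, 1, 2],
        "Genesis":     [2, 1, 2, 5, 5],
        "Boston":      [1, 2, 1, 4, 5],
    }

    # Songs that the same listeners like (or dislike) together correlate
    # near +1; songs that split the audience correlate near -1.
    for a, b in combinations(scores, 2):
        r = correlation(scores[a], scores[b])
        print(f"{a:12s} vs {b:12s}  r = {r:+.2f}")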

The research can now help you establish boundaries for the station. It can help you see which songs are not only popular but are also consensus cuts, in line with other songs that appeal to your core audience. And you can see relationships that, if exploited in your scheduling, will give you song-to-song balance control never before possible.
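
As a toy illustration of that balance control, assume a positioning analysis has already assigned each song a cluster label. A scheduler can then enforce a simple rule, such as never playing two songs from the same cluster back to back. The cluster labels here are hypothetical.

    # Toy balance check: no two consecutive songs from the same cluster.
    cluster = {
        "Madonna": "pop", "Paula Abdul": "pop",
        "Genesis": "rock", "Boston": "rock",
    }

    def balanced(playlist):
        """True if no two consecutive songs share a cluster."""
        return all(cluster[a] != cluster[b]
                   for a, b in zip(playlist, playlist[1:]))

    print(balanced(["Madonna", "Genesis", "Paula Abdul", "Boston"]))  # True
    print(balanced(["Madonna", "Paula Abdul", "Genesis", "Boston"]))  # False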

Fortunately, these days, just as Steve Casey Research provides Variety Control Positioning Analysis, most of the major research companies are providing some kind of positioning analysis. Given the importance of the recurrent and gold library for most stations, you should avoid using any approach that does not provide you with the ability to position the music to match your programming vision. You should be as comfortable with that as you are with the mean scores.

Above, I shared some key things we have learned about music research. Below, we'll look at some of the other kinds of programming research being conducted by radio stations.

Arbitron analysis is another tool, and it's not used as much these days as it was back in the 1980s. What is odd about that is how much more difficult the analysis was back then, because we did not have the computer tools we have today. One of the effects of consolidation, or perhaps it is coincidence, is that fewer programmers understand, in depth, the behavior of their audience.

There are some pretty good tools now, and Arbitron itself is bringing out new analysis tools. The fact is that the Arbitron survey is the only behavioral, as opposed to attitudinal, survey that we do with the listeners. Just what are they doing, minute by minute?

What about their behavior caused our results to come out the way they did? Making Arbitron analysis an important tool is a matter of digging and asking lots of questions, whether the tool is Arbitron's Maximizer, PD Advantage, or InstantREPLAY from Steve Casey Research. Behavior like what time listeners arrive at work is important to know. Your morning show may talk to the audience differently when they're alone in their cars than when they're sharing the radio in the workplace.

A few other things that you can learn about diary keeper behavior:

  • Hot Zips: This information can target station promotion locations, personal appearances, the retail sales effort, billboards, and other marketing.
  • Direct Tune-Out: Where? When? Who is your competition, quarter-hour by quarter-hour? Break down competitive listening by location.
  • Weekly: Diary count, TSL, and listening volume on a week-by-week basis.
  • Location: See how the audience flows between home, in-car, and at-work listening locations to best time your news, traffic, content, and overall mood.
  • Peaks and Valleys: Learn exactly which quarter-hours in every daypart listeners are most likely to record radio listening. Fine-tune your commercial load to keep these key quarter-hours as clean as possible.
  • Key Listeners: Calculate reach and frequency for your heavier (greater than 4 hours per day, for example) Quintile 4 and 5 listeners to plan rotation schedules that lower perceived repetition. Learn when you will have to retest and/or freshen each category.
  • Loyalty Problems: Look for any quarter-hour with a low Programming Efficiency Rating (PER), the percent of time with radio that you kept your listeners at home, listening to you. Anything below 20% is a serious problem; this is where you must turn your attention. 20%-25% should cause real concern. Anything over 30% is performing well; over 35%, not to worry! (A worked sketch follows this list.)
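
Here is the sketch promised above: a minimal PER check in Python, using the thresholds from the list. The formula is my reading of the definition, your listeners' at-home quarter-hours spent with you divided by their at-home quarter-hours spent with any radio; treat it as an assumption, and note that the text leaves the 25%-30% range unclassified.

    # Programming Efficiency Rating (PER) check. The formula is an assumed
    # reading of the definition above; thresholds come from the text.
    def per(qh_with_you, qh_with_any_radio):
        """Percent of at-home radio quarter-hours spent with your station."""
        return 100.0 * qh_with_you / qh_with_any_radio

    def assess(pct):
        if pct < 20:
            return "serious problem - turn your attention here"
        if pct <= 25:
            return "real concern"
        if pct <= 30:
            return "unclassified in the text - watch closely"
        if pct <= 35:
            return "performing well"
        return "not to worry"

    # Example: 3 of 18 at-home radio quarter-hours spent with us (~16.7%).
    print(assess(per(qh_with_you=3, qh_with_any_radio=18)))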

The best way to shoot yourself in the foot with any kind of ratings analysis is to make a big change without a second opinion. This is particularly true if you are basing your decision solely on the information in the printed Arbitron report. It would be better to dig into the diaries to understand what diary keeper behavior led to those numbers.

We all know that with Arbitron measurement, because of sample-size bounce, any one answer could be wrong. This is not an Arbitron problem; it is the nature of a survey. The second opinion we use could be another Arbitron. It might be a previous Arbitron that we study to see if a trend was developing. It could be a perceptual study you do to try to discover feelings and opinions from your listeners that support it.

Audience tracking is another kind of research that has proven to be a valuable tool. It can be a check on unexpected Arbitron results.

Audience tracking studies are often done as part of callout research, where you find out each week how many people listen to you and which station they listen to most. Our experience has been that the most valuable use of that tool is not so much the weekly information, because it too is subject to a lot of bounce. It is as a second opinion: in situations where we get a bad Arbitrend, because we have this tool in place, we can go back and re-interview people we talked to three months earlier and look for shifts.

If a bad result has been reported, and if things have really fallen apart, then we should easily find that: we would expect a lot of people who had said they listened to us to be shifting to another radio station. The direction that shift goes in can tell us a lot. If we see no shift, then maybe it's a bounce, and we wait for the next Arbitrend.
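
A minimal sketch of that re-interview comparison in Python, assuming two waves of "listen most" answers from the same respondents. All respondent IDs and station labels are invented.

    from collections import Counter

    # Hypothetical re-interview data: each respondent's "listen most"
    # station three months ago (wave 1) and today (wave 2).
    wave1 = {"r1": "US", "r2": "US", "r3": "US", "r4": "RIVAL", "r5": "US"}
    wave2 = {"r1": "US", "r2": "RIVAL", "r3": "RIVAL", "r4": "RIVAL", "r5": "US"}

    # Count where our former "listen most" people went.
    shifts = Counter(wave2[r] for r in wave1
                     if wave1[r] == "US" and wave2[r] != "US")
    print(shifts if shifts else "No shift - maybe the bad trend is just bounce.")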

Perceptual research is perhaps the most interesting kind of research we do. It's also the research that we're probably the most nervous about. I don't think there is one of us who hasn't, at some point, felt that we didn't get our money's worth from a perceptual. There is a very high loaded-gun kind of danger in this type of research.

Most of us have seen this situation: over a year or two, you have a couple of different research companies do a study for you, and when you think about it, you notice that the results, the raw data, are just about the same. Yet when the people from the research companies present this data to you, they draw different conclusions and they make different recommendations. There is more going on here than just gathering information.

But focusing for a moment on the gathering of the information, we've found a couple of things over the years that will help control the aim of this gun. One is not to use too wide a sample. Don't talk to people who, in reality, you have virtually no serious hope of getting to listen to your radio station in the next year. The old saying that "25-54 is not a demo, it's a family reunion" is valid.

The second is not to cover too wide a range of topics. The easy test, of course, is "Is it actionable?" If you're a CHR, are you going to do anything with the information you learned about people's interest in newscasts in afternoon drive? Probably not.

The next issue is the will to act. You should not study things that you're just not willing to act upon. When I programmed WLS, we did a very expensive survey of the Steve Dahl show that we had on in afternoon drive. We had a music CHR, an FM station, and basically a 4½-hour talk show in the afternoon. Surprisingly, the study came back and said you're never going to get a music image with a 4½-hour talk show in the afternoon. But neither Steve Dahl nor management had a desire to move the show. We knew that before we did the study. Fifteen years later, I still haven't figured out why we did that study.

The next thing that makes a perceptual valuable is being committed ahead of time to the follow-up. Companies that are good at perceptual research will craft a combination of elements and fit them together so that they work in a synergistic way. But the fact is: until it's actually on the air, and the audience hears these elements combined with your jocks, against your competition, in your market, over a period of time, it's still an intelligent guess.

You've thrown it against the wall, and now you're going to have to make a mid-course correction. You need to plan for it. A lot of times, people will blow their entire budget on an initial study and make no allowance for follow-up. Then, of course, the station starts to drift, and they lose a lot of the original investment.

The final point about perceptual research is the need for smart analysis. This gets us back to my earlier observation that different research companies will draw different conclusions from similar results.

The primary reason is that, unlike in a lot of other kinds of research, we're not really looking for facts in a perceptual.

We're looking for conclusions.

We're looking for knowledge.

Conclusions and knowledge come from the application of wisdom and experience, and there are people in the industry who are really amazing at this. Of all the kinds of research, this one, more than any other, is where you're going to have to be extremely careful when you choose your partner.

Your ratings results are a kind of programming feedback, but they are not as specific or as timely as you need, nor do they help you make the kinds of fine distinctions that you need. Approach carefully, but do embrace the tools available to help you reach higher levels of listener satisfaction. The payoff in terms of programming success will be exhilarating.

More ideas and information on this topic:

How to Set Your Research Priorities