Ad copywriters have long been wary of how well ad copy-testing works, and not just because, like the rest of us, they are sensitive to criticism of their work. Should the writers of informational digital-content copy also be wary of copy-testing? Do the problems with ad copy-testing apply to content copy-testing?
Even top researchers (e.g., Arthur Kover, Journal of Advertising Research, 1996, 36:2) have acknowledged that ad copywriters had legitimate concerns about how well ad copy-testing can indicate which version of an ad will be more effective, or even whether the copy needs improvement. Most of the copywriters' objections boil down to two major concerns: the survey environment is (1) too distraction-free and (2) too rational compared with the environment in which ads are actually consumed. Let's look at the thinking behind those two concerns.
- Copywriters have said that "the survey environment of copy-testing is too different from the distraction-filled environment in which the copy appears in real life." This was unquestionably an issue when most of the copy being tested consisted of ads. In real life, ads often appear peripherally in cluttered settings or as undesired interruptions to the content people have opted to consume. So copywriters have to go to great lengths to make the interruption grab viewers' attention. Early copy-testing methods rarely re-created these cluttered settings and thus underestimated the value of attention-getting ads. Later research methods partly solved this problem by leading respondents to believe that they would be asked about the TV or magazine content they were to view (thus distracting respondents from the ads), but then asking them about the ads. Regardless, insufficient distraction is almost a non-issue when copy-testing digital content. In the real world, consumers opt to read digital content, usually by clicking a headline link or a link in an e-newsletter. This ensures that some level of attention will be given to the content. By the time the person has opted in, attention-getting has already been achieved (presumably by the content's headline, which is harder to test accurately via a survey). So survey pre-testing does not need to distract respondents away from the content, because respondents will generally not be distracted away from the digital content when consuming it in the real world.
- And copywriters have said that "the research process of filling out pages full of checkboxes evokes excessively rational responses." This complaint was especially a concern when the copy being tested was full-screen TV ads whose central thrust often hinged on visual- and music-driven emotional appeal. Much of Web content is different. Even though the long-lasting success of Web content also depends on its emotional appeal (its ability to tell a story that deeply resonates), much Web content seeks to appeal as much to reason as to emotion. Also, these days, pre-testing is different. A fair amount of digital-content copy pre-testing occurs not in mall intercepts or telephone interviews, but on the Web. Gone is the experience of having to take pen to paper or having to answer questions directly to an interviewer. Now, the click-and-progress process of completing an online survey is fairly similar to how consumers progress from one piece of content to the next on the Web. In other words, both Web content and the process of getting to that content already put users into a relatively rational mode that isn't all that different from how users complete a survey online. So the excess rationality of surveys relative to digital content is much smaller than the excess rationality relative to TV ads. Nevertheless, the excess rationality of surveys remains an issue to guard against. Researchers are working to improve the online survey environment to counter this, while taking care not to introduce unusual screen backgrounds and interactions that, independent of the content being tested, would create their own dynamics and skew results.
In sum, at least one of copywriters' major concerns with ad copy-testing does not seem to apply to digital copy-testing: the lack of distraction in the survey environment no longer seems a problem, because people opt to read digital content rather than being interrupted by it. The other concern, that the survey process is too rationalistic, seems less severe than with TV ads, because much Web content hinges on rational appeals. But this latter concern persists. When the content being tested is clearly making emotional appeals, good researchers will know to rely less on merely quantitative results and look instead to qualitative-research learnings, perhaps obtained earlier in the content-development process, or from open-ended or oblique emotion-detecting questions included in the formal copy-testing.