22 March

LAMPSHADE

Dear Diary,

I just finished the first draft of the LAMPSHADE paper. I realize now that I haven’t tweeted about this project (my bad!), so here’s a quick description: it’s a survey-driven thought leadership piece that’s intended to serve as a TOFU (top-of-funnel) awareness generator.

OK, you’re all caught up.

Anyway, so my client worked with an analyst—aside: some of you know how I feel about analysts (what a delightfully ambiguous statement!)—to conduct a short survey of a bunch of industry experts. They gave me the Excel sheet of the survey responses and my job is to produce a smashing paper that turns all these numbers and things into a compelling narrative.

Easy-peasy-lemon-squeezy.

But sometimes lemons squirt you in the eye (and it stings, and you can’t make lemonade with lemon juice and eye goo). In this case, that squirt took the form of two very poorly constructed questions. Now, I’m not sure who wrote the questions, and I suspect there was lots of back-and-forth, but the fact is that these two questions are not constructed in a valid manner. It’s clear enough to me what everyone thought they were asking, and why they presented the survey participants with the options that they did, but unfortunately that effort has gone to waste.

Now, to be clear, these aren’t leading questions—that’s good, because I can’t stand leading questions. I think surveys should try to find truth, should try to learn something, and the only thing leading questions teach is that leading questions are dumb (and you should already know that).

No, these questions are poorly constructed in a few ways.

The first poorly constructed question (PCQ) is a two-parter, of the form: “Do you use X to achieve Y and Z?” Then it presents four options: one is a straight “yes” and three are variations of “no”. OK, so three things are wrong here:

  • First, the two-part question leaves a tonne of room for ambiguous interpretation: some people will treat that “and” super strictly, some will be more lenient
  • Second, combining Y and Z into one question limits the survey’s ability to learn about each independently (asking them separately would also let us correlate the two; there’s a quick sketch of this below)
  • Third, the three “no” answers are not mutually exclusive and they don’t cover the whole domain of possible “no” answers

There’s very little we can do, with any confidence, based upon the answers to this question.
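To make that second point concrete: if Y and Z were asked as two separate yes/no questions, a simple cross-tab would tell us whether people who use X for Y also tend to use it for Z. Here’s a minimal sketch in Python with pandas; the column names and toy responses are invented for illustration and aren’t from the actual survey.

```python
import pandas as pd

# Invented example data: with separate questions, each respondent has
# an independent yes/no for each goal.
df = pd.DataFrame({
    "uses_x_for_y": ["yes", "no", "yes", "yes", "no"],
    "uses_x_for_z": ["yes", "no", "no", "yes", "yes"],
})

# The cross-tab exposes all four combinations (Y only, Z only, both,
# neither). The combined "Y and Z" question collapses them into one
# ambiguous answer, so this analysis is impossible with the data we have.
print(pd.crosstab(df["uses_x_for_y"], df["uses_x_for_z"]))
```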

The second PCQ builds on the first. It goes, “If you use X to achieve Y, then what type of X do you use?” The available responses include a bunch of types of X plus another option.

ARGH!

Let’s examine some of the ways this question makes my life more complicated:

  • It’s a conditional, structured to suggest that only respondents who answered “yes” to the preceding question can answer it. But looking at the raw data, I can see that the responses include participants who said “yes” to the preceding question AND participants who picked one of the “no” answers (there’s a sketch of how to flag those rows after this list)
  • Note that this second question only links X to Y, not to “and Z”…so we’ve basically just thrown Z in the trash, and I’m not sure why
  • The available answers are weak: the examples of types of X aren’t comprehensive
  • It’s a single-response question, even though it’s perfectly reasonable that a respondent could have multiple X solutions in place
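For what it’s worth, spotting that leaky conditional doesn’t take much. Here’s a quick way to flag those rows, sketched in Python with pandas; the file name and column headers are hypothetical stand-ins for whatever the client’s sheet actually uses.

```python
import pandas as pd

# Hypothetical file and column names, standing in for the client's sheet.
df = pd.read_excel("survey_responses.xlsx")

# Q2 is gated on Q1: only respondents who answered a straight "yes" to
# "Do you use X to achieve Y and Z?" should have answered Q2 at all.
leaked = df[(df["q1_use_x_for_y_and_z"] != "yes") & df["q2_type_of_x"].notna()]

print(f"{len(leaked)} respondents answered Q2 despite a 'no'-variant on Q1")
```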

There are two things that are frustrating about these questions, collectively:

  1. I can clearly see what it is the survey’s trying to discover, but the questions are so poorly constructed that we can’t actually get at those insights—that’s lousy for the report, and it’s lousy for the client (who has missed an opportunity to learn)
  2. Their poor construction forces me to explain to the client that we can’t do anything with these questions; maybe they’ll understand, or maybe they’ll think I’m being unnecessarily particular. Either way, it’s a conversation that’ll take away productive time from one of my many other projects

Now look, crafting effective surveys isn’t easy. I recently helped a client do a survey-driven report—we took our time crafting the questions and I’m pleased to say the report turned out really, really well, but looking back there are still some things we could’ve done to improve the survey.

I suspect in the LAMPSHADE case, the PCQs are the result of design-by-committee mixed with a crunched timeline and stirred up with the curse of knowledge (as in, everyone making the survey knew what they were trying to get at). And I suspect most people tasked with writing a report from the survey responses would just dutifully draw faulty conclusions from bad data.

But I’m not most people, Cromulent isn’t most marketing agencies, and we hold our work to a very high standard—and that includes data analysis—so I guess I’ll be having a conversation about why those two questions are, essentially, throwaways.

(the report’s still gonna be really good, though)

Latah!

/L