
A highly educated guess: How to build original data on the right research hypothesis

Author
Brooklin Nash
May 29, 2025

In an otherwise unpredictable time for content marketers, original research is a safe bet. It works as a brand play, a content engine, and a pipeline generator. But if you don’t know how it works, it can seem like a risky investment of your limited time and budget. How do you make the most of original research?

By starting with an educated guess: your hypothesis. Since you can’t afford to take a shot in the dark, identifying the story you hope to find in the data really matters. Long before you send a survey, the right hypothesis can majorly de-risk your original research spend—and it does a whole lot more. It:

  • Focuses the final report from the start.
  • Aligns the project with business goals and marketing priorities.
  • Refines survey questions for better responses and results.

To understand what makes a strong hypothesis (and what to do if the data doesn’t agree with it), we talked to two original research experts: Erin Balsa and Becky Lawlor.

Both drew on their years of experience in survey design and original research content to share their tools for honing a hypothesis.

Stage 1: Craft an educated guess

Many of us marketers probably heard the word “hypothesis” for the first time in elementary school during the unit on the scientific method. We learned that a hypothesis was an educated guess about whether plants would grow under fluorescent lights or a volcano would erupt if we combined baking soda and vinegar.

An original research hypothesis for B2B purposes isn’t much different. Erin and Becky point to three key qualities that make a good one:

  • Testability. “A hypothesis is a testable prediction that explains what you expect to happen,” Erin says. Becky adds that you need to be able to ask questions that either validate or invalidate the hypothesis.
  • Relevance. When your audience reads the story of your research, it should resonate with them. Ideally, they might think, “Yeah, I’ve wondered about that, too.”
  • Narrative. When you collect original data, your ultimate goal should be to tell a story. And a strong hypothesis, when proven out by data, begs follow-up questions—a wider narrative, as Becky calls it.

If your research hypothesis meets these criteria, you’re on track.

B2B original research might explore what your audience is struggling with, how they’re currently solving a problem, the industry shifts they’re seeing, or which new tools they’re leaning on. So your hypothesis will sound less like a claim about baking soda and more like a testable prediction about one of those areas.

Your hypothesis takes aim at how you think your audience will respond through the data before you ask a single survey question, and it shapes your report, from survey design to distribution.

One hypothesis to rule them all?

Should you have just one core hypothesis or several? Becky leans toward the latter approach. 

Start with an overarching narrative, she advises—the big-picture story you think your audience is experiencing. Within that narrative, develop multiple specific hypotheses you can test with separate question groups, all of which point back to your story.

When we published the Closing the Content Gap report last year, our main gut feeling was that content was broken—there was a disconnect between content leads and the wider GTM team. We hypothesized things like:

  • Content marketers have room to improve their cross-departmental collaboration.
  • Non-content folks usually have a bad time contributing to content.
  • Leads and conversions are an overrated content metric.

The data supported these hypotheses. But because we had multiple hypotheses instead of just one, our research didn’t rise and fall on one hunch. Even if the data didn’t quite match one facet of our narrative, we would still have a story to tell. 

When you have one story with multiple hypotheses, your data-driven content stays nimble.

How to decide on your story

The story you chase with original research shouldn’t come out of thin air. If you survey a few hundred people about an out-of-the-blue hypothesis, you’re spending thousands of dollars to take a shot in the dark. That’s a pretty expensive and uneducated guess.

Any narrative that you pursue with original research needs to start with what you’ve already been saying to your audience—which is why you should work closely with product marketing to pick a narrative. Your friendly neighborhood product marketer has likely already put in a lot of legwork to nail down your brand messaging, so ask them for input on where original research could shore up your story.

Reflect on the claims you make through your website, demand gen campaigns, or content. What unique POV do you share, and what do you fundamentally believe about your industry, ICP, or problems in need of solving? Find opportunities to go deeper or places where original, proprietary data can make your argument stronger or uncover additional nuance. 

After you look inward for the story you’re already telling, look externally for what the market is signaling. Here, too, product marketing’s customer research will come in clutch. Do conversations with your customers and customer-facing teams reinforce your ideal research narrative? What gut feelings do they have that fit within your story and meet the criteria for a strong hypothesis? 

Now, you can shape your narrative—merge the big-picture market conditions and pressing problems with what you believe the solution is. Choose a handful of related, testable hypotheses you can use to create a survey that explores (and hopefully, supports) your story.

Stage 2: Turn your hypothesis into a well-crafted survey

You know the story you’d like to tell with original data. Now, you need to prepare to gather it.

Spoiler alert: Survey design is a huge field. Entire college courses exist to help folks learn the ins and outs—and if you’re not ready to take out student loans, we recommend resources like Typeform’s Survey School series and Coursera’s Data Collection classes to get you started.

So while we won’t cover everything there is to know here, Erin and Becky gave us a few guideposts to go from testable guess to survey questions.

1. Break down your hypotheses

If you hypothesize that combining baking soda and vinegar will make a volcanic eruption (and a huge mess in your kitchen), you can test it pretty quickly and straightforwardly: Mix the ingredients and see what happens.

You need a more subtle approach to testing your B2B research hypothesis. If your hypothesis is that marketers are burnt out, Question #1 shouldn’t be, “Are you burnt out? Yes or no?” Everyone’s definitions of “burnout” might be different, and there are a lot of ways for the question to go south. 

Instead, ask a series of questions that paint a picture of marketers’ burnout. You might ask:

  • Which of these symptoms have you experienced in the last six months? (With a multiple-choice list of burnout symptoms)
  • How many hours, on average, do you work per week? (With number ranges)
  • How satisfied are you with the support you receive from your leadership or colleagues? (On a scale from Extremely Satisfied to Extremely Dissatisfied)

This approach broadens the scope and increases the chance of meaningful, insightful results because it’s not pass-or-fail.

2. Lead survey takers through a story

You chose a narrative for your original research—walk your respondents through it with your questions. When Becky designs a survey, she guides respondents through tried-and-true rhythms: “Each section of questions matches a particular theme.” 

  • She might start with questions about their current reality, asking about the tools they use now or which problems they’re facing.
  • Next, she might ask about the most pressing research topic at hand. If it’s AI, for instance, she asks about their outlook on it or how AI helps them solve challenges. 
  • The survey might finish by asking about the future. What do respondents plan to do or adopt next, and what are their near- and long-term goals?

Becky’s seen surveys that don’t follow a narrative flow and progress seemingly at random. It’s a poor experience for the survey taker and the content team: “They end up with a bunch of random data points that they’re now trying to force into a story.”

Survey takers’ minds are wired for stories. Give them one.

3. Be as impartial as possible

Even though your heart is set on one specific story, resist the urge to purposely steer your audience toward the results you want.

Erin recommends keeping your survey description relatively neutral and light on the details. It’s the difference between saying, “We’re hoping to understand whether eggs crack when thrown on the ground,” and saying, “We’re interested in learning about your experience throwing objects on the ground.” (Stick with the second option.) 

In our marketer burnout example, this might look like telling your respondents you’re looking to explore marketing work culture and team structure rather than trying to learn if they’re burnt out. 

Erin offers two more ways to avoid leading the witness:

  1. Crafting a multiple-choice question? Include a mix of both positive options that support your ideal narrative and negative options that would disprove your narrative. (And maybe some neutral options that don’t have much bearing on your narrative.)
  2. On a related note, randomize the order of the responses within your survey tool so that respondents don’t see all of the positive options first. “Even something seemingly innocuous like order can influence respondent behavior and impact your data quality,” Erin says. 

You put in the work to develop an informed story backed by the market and your team. Trust that the data will support it—don’t artificially inflate the results.
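
Most survey tools have a built-in randomization toggle, but if you’re scripting your own survey (or just want to sanity-check the idea), Erin’s second tip is a few lines of code. Here’s a minimal sketch; the option labels and the `randomized_options` helper are hypothetical:

```python
import random

# Hypothetical multiple-choice options: a mix of positive, negative,
# and neutral answers so the list itself doesn't lead respondents.
options = [
    "Collaboration with other teams has improved this year",  # supports narrative
    "Collaboration with other teams has gotten worse",        # disproves narrative
    "Collaboration hasn't changed much",                      # neutral
    "Not sure / doesn't apply",                               # neutral catch-all
]

def randomized_options(options, anchor_last=None):
    """Shuffle answer order for each respondent, optionally keeping a
    catch-all option (like "Not sure") pinned to the bottom."""
    pool = [o for o in options if o != anchor_last]
    random.shuffle(pool)  # a fresh order for every respondent
    if anchor_last:
        pool.append(anchor_last)
    return pool

# Each call produces a new order for the next respondent.
shown = randomized_options(options, anchor_last="Not sure / doesn't apply")
```

Pinning the catch-all to the bottom is a common convention: respondents expect “Not sure” last, so shuffling it into the middle can hurt the survey experience more than it helps data quality.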

Stage 3: Uncover insights from the data

Now the magic happens: Survey responses start to roll in. But what if respondents don’t answer how you expected? Here are the dos and don’ts of approaching the data and keeping your hypothesis front and center, even if responses go off-course. 

Do make a plan to pivot (if necessary) 

In an ideal world, every question response comes back exactly as you expected, completely supportive of your narrative. 

But when you’re surveying real people with nuanced experiences, you should make room for the possibility that one or more of your hypotheses won’t get the support you hope for. 

Remember Becky’s advice to have one overarching narrative with multiple sub-hypotheses? This strategy leaves you wiggle room in the survey design for some things to go differently than expected.

“The entire research project shouldn’t hinge on one hypothesis being absolutely right.” — Becky Lawlor

Before and as you start collecting data, consider how you’ll adjust if each element doesn’t come back as planned:

  • What different directions could you take your narrative that still support your brand messaging? 
  • Could you adjust some of your hypotheses on the fly or present them from a different angle and still stick with your ideal narrative? 
  • Are there multiple explanations your brand could present for divergent data?

If you make some space for these possibilities before you’re analyzing the full data set, you’re less likely to be blindsided if your audience surprises you.

Do soft launch your survey

After you’ve collected the first 10-15% of your responses (of your target pool if you’re fielding the survey yourself, or of the guaranteed total from a panel vendor), pause the survey and review the data. Are any of the questions getting strange or unhelpful responses? If so, is the wording or structure to blame?

Here are some of the adjustments you can make after soft launching:

  • If you spot opportunities for more clarity, rephrase the question.
  • If you need more information from respondents to make the question meaningful, add a follow-up. 
  • If people consistently answer the same thing in your open-ended “other” box, add it as a multiple-choice option.
  • If a question doesn’t make sense altogether, you might pull it. 
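
One quick way to spot that recurring “other” answer during a soft launch is a frequency count over the free-text responses. A minimal Python sketch, with made-up responses:

```python
from collections import Counter

# Hypothetical free-text answers from the "other" box after a soft launch.
other_responses = [
    "slack messages", "Slack messages", "email follow-ups",
    "slack messages ", "spreadsheets", "Slack Messages",
]

# Normalize case and whitespace so near-duplicates group together.
counts = Counter(r.strip().lower() for r in other_responses)

# Any answer appearing in, say, 3+ responses is a candidate
# to promote to its own multiple-choice option.
candidates = [answer for answer, n in counts.items() if n >= 3]
# → ["slack messages"]
```

The threshold is a judgment call; the point is to let the data, not a single memorable response, decide what earns its own option.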

A soft launch is a chance to ensure data and question quality. A soft launch isn’t a chance to check whether the data matches your hypothesis and throw out a handful of questions that don’t.

That said, part of ensuring data quality is making sure you aren’t seeing anomalies across the board. While you’re not making question-level changes to support your narrative during a soft launch, it’s not a bad idea to check for sweeping red flags. “If four out of 20 findings don’t come back in our favor, that’s okay,” Erin says. “If 14 findings don’t come back in our favor, that’s obviously going to give me pause.”

If a large majority of questions return confusing or seemingly random results, you might need to reevaluate. Your survey design might be faulty, so it could be time to regroup and restart with different questions.

Do segment for deeper insights

Sometimes the problem isn’t that a question contradicts your narrative. Maybe the results just aren’t insightful. 

In this case, try cross-tabulating responses to uncover meaningful differences between demographics within your audience:

  • Do larger or smaller companies feel differently about an issue? 
  • Do higher-growth versus lower-growth companies behave distinctly in a certain area? 
  • Do individual contributors view the problem differently than the C-suite? 

Break down your data into subgroups to find new patterns and insights worth reporting on. A strong narrative and a well-crafted survey lead to data and content your audience will be eager to dig into.
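
Cross-tabulation itself is mechanically simple: group responses by a demographic field, then compare answer distributions across groups. Most survey tools do this for you, but here’s a minimal sketch with hypothetical data to show the idea:

```python
from collections import defaultdict

# Hypothetical responses: (company-size segment, answer to one question).
responses = [
    ("smb", "agree"), ("smb", "agree"), ("smb", "disagree"),
    ("enterprise", "disagree"), ("enterprise", "disagree"), ("enterprise", "agree"),
]

def crosstab(rows):
    """Count answers per segment: {segment: {answer: count}}."""
    table = defaultdict(lambda: defaultdict(int))
    for segment, answer in rows:
        table[segment][answer] += 1
    return table

def pct_agree(table, segment):
    """Percentage of a segment that answered 'agree', rounded."""
    counts = table[segment]
    total = sum(counts.values())
    return round(100 * counts["agree"] / total)

table = crosstab(responses)
# Comparing segments surfaces the story a blended average would hide:
# here, SMBs agree 67% of the time while enterprises agree only 33%.
```

A blended “50% of respondents agree” headline is forgettable; a “SMBs are twice as likely as enterprises to agree” finding is a report section.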

Don’t cherry-pick supportive data points

Your hypotheses make all the difference for the end product of your research. And if you’ve crafted them right, they can even carry you through if some of your data isn’t what you expected.

The B2B world has a buyer trust problem. Much of it stems from bad data practices—some brands report ROI in a purposely deceptive way, while others attribute data poorly (or not at all). If you want to be part of the solution instead of the problem, aim for integrity in your data reporting. 

Does that mean you have to include every data point, even those that don’t paint your brand in a great light? Not necessarily. (For instance, if you’re collecting data on customers’ reported impact with your product, maybe the time savings reported wasn’t as significant as you hoped.) But don’t tweak the numbers to make yourself look better in a misleading way. Your buyers are savvy, and they can often sniff out when a brand’s trying to misdirect them with bogus data. Audiences are also inherently more skeptical of, say, an ROI report than data that quantifies buyers’ problems or captures general sentiment—so tread carefully (and transparently) when talking about how amazing your product is.

The good news is that you can report on surprising data points without throwing out your hypothesis wholesale. Here’s how:

  1. Admit you were surprised. Note what you expected (AKA, what the hypothesis was) and how respondents answered instead. When you include data that doesn’t 100% support the narrative, you build trust with your reader. 
  2. Offer a potential explanation for the data, and find an angle that can comfortably co-exist with your story, if possible.

Once again, this is why you need a narrative that’s complex and durable enough to withstand even a handful of questions that don’t line up with your plans. Content leaders are creative, scrappy, and resilient—sometimes that means adjusting on the fly to report on data honestly and with nuance.

Don’t be afraid to shift the story

If you’ve laid the groundwork for the data with multiple hypotheses and a plan to adjust if necessary, you have what you need to actually listen to the data and report on it well.

Plan A is that all the data supports your hypotheses—and you can easily tell the exact story you set out to tell. Plan B looks like data that doesn’t 100% support your narrative beat for beat but that still tells you something interesting and insightful about your audience. 

As long as the data doesn’t outright contradict your product’s value proposition and position in the market, you can likely work with it. It just might be time to embrace one of the alternate angles you considered. Talk to your team (not just marketing, but customer-facing teams like CS and sales) about what you’re seeing in the data. Have they heard something from customers that might offer an explanation for the patterns you see in the data?

If the data surprises you, that could be a signal that the story the data reveals will be surprising and insightful to your audience, too. You just might be bringing something truly novel to the conversation—which means your ICP will eat it up, too.

“Consider surprising data findings a gift that allows you to improve your narrative.” — Erin Balsa

Original research starts and ends with a good guess

A research hypothesis is the compass for your research—not Google Maps. It doesn’t tell you every single exact turn your data will take. But each hypothesis offers essential direction and helps you chart a course to the right questions and the right interpretation of the data. 

Sure, sometimes the research journey will take a turn or two you didn’t expect; that’s part of the process. Build your original research on the foundation of audience-informed hypotheses, and you’re bound to end up at the ideal destination in the end: A solid, data-driven story and the buyer trust you were after all along.