Six pitfalls to avoid when researching your next product

Published on: 20th August 2019

There are a number of potential pitfalls that a project team can fall foul of when it comes to planning and conducting an evidence-based product development programme. Six of the most common that we have observed over the years are:

 

1. Not doing any research

2. Conducting research for the wrong reasons

3. Doing the wrong kind of research

4. Biasing the outcomes

5. Poor translation of findings

6. Ignoring the results

 

1. Not doing any research

Like any design activity, research consumes precious resources. It takes time, money, and trained individuals to conduct high-quality design research. The commitment to this has to be carefully balanced with the project risk.

Furthermore, research often requires additional planning and approvals. Ethics are a key consideration to ensure that any participants are treated fairly and lawfully. Compliance with personal data protection legislation (GDPR) is of paramount importance. Likewise, project confidentiality can be a serious concern – ensuring that ideas and design direction are not exposed to competitors and that patent positions are not compromised.

Given these challenges, it can be tempting to avoid research as a costly and time-consuming activity. However, the costs of developing the wrong product (financial, time delay, loss of reputation, lost sales and revenue), or developing the product in the wrong way, are almost always far greater – and that is before counting the rewards that flow from developing and launching the right products for your target markets.

 

2. Conducting research for the wrong reasons

Many enlightened organisations have embedded design research into their design processes. This is often aligned with ‘project gateways’ in such a way that it is not possible to progress a project without some form of evidence of stakeholder acceptance. Ostensibly, this is a great thing, as it should ensure that stakeholder voices are considered and allowed to shape the design throughout the design process. However, in certain cases, it can lead to research becoming a simplistic validation task (or in the worst case a tick-box exercise).

At the most basic level, it is important to keep intent in mind. If the aim of the research is simply to get through a gateway, then this typically means that the research approaches are simplified and insights often lost. The more positive alternative is to ensure that research is always conducted to learn something new – to challenge or confirm assumptions.

 

3. Doing the wrong kind of research

Not all research is equal. The choice of method will have a big impact on the data collected and the insights gained. One of the biggest distinctions is between qualitative and quantitative research. Qualitative research is exploratory. It is generally performed with a relatively small number of participants to collect deep, open-ended insights. Typical approaches include focus groups, semi-structured interviews, and observations. Conversely, quantitative research generally involves collecting more structured, narrower data from a sample large enough to support statistically meaningful analysis. As the name suggests, it is typically used to quantify attitudes or opinions and to generate statistical results. Typical approaches include online surveys or polls, or highly structured interviews focusing on selecting from pre-defined options.
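To give a sense of what "a sample large enough" means in practice, here is a minimal sketch of the standard sample-size calculation for estimating a proportion from a survey. The figures assumed below (95% confidence, a worst-case split of p = 0.5) are illustrative conventions, not values from this article:

```python
import math

def survey_sample_size(margin_of_error, confidence_z=1.96, p=0.5):
    """Minimum number of respondents needed to estimate a proportion.

    Uses the textbook formula n = z^2 * p * (1 - p) / e^2, where
    z = 1.96 corresponds to 95% confidence and p = 0.5 is the
    worst case (it maximises the required sample).
    """
    n = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)

# A 5% margin of error at 95% confidence needs roughly 385 respondents.
print(survey_sample_size(0.05))  # 385
```

Note how quickly the requirement grows as precision tightens: halving the margin of error roughly quadruples the sample needed, which is one practical reason quantitative methods are usually reserved for questions that genuinely need statistical backing.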

The type of approach selected, and the way it is conducted, will clearly affect the insights that can be gleaned. It’s not that any one method is better than another; rather, one may be far more fitting for a particular project, situation, and place in the design process. As a general rule of thumb, qualitative approaches work much better when exploring ideas, often in the early stages of the design process, as they allow user preferences and opinions to be explored in detail. Quantitative methods tend to be better used at a later stage in the project to validate assumptions.

The important thing to remember is that methods should be carefully selected to fit the project, the stage in the design process, the level of uncertainty, and the questions that need to be answered.

 

4. Biasing the outcomes

Another risk is biasing the outcomes of the research, sometimes intentionally, but more often than not, unintentionally. This is something that is easy to fall foul of, and can happen based simply on the structure of the research approach. For example, if one of the products being discussed can be associated with the people asking the questions, the participants may be far more inclined to be polite and say good things about it in their responses. Another example might be leading questions – for example, “would you agree that concept B is the most usable?”

With careful consideration, these issues can be easily avoided – involving someone trained in research methods is often critical to this.

 

5. Poor translation of findings

Perhaps the most common pitfall, and arguably the hardest to avoid, is failing to develop actionable recommendations from the research.

The classic example of this is where a report is delivered and simply placed on the shelf because the insights, and how they impact on the project work, are not clear to the project team. Often the underlying data is good; the shortfall is that the insights have not been extracted in a meaningful way.

Another variant of this is where conclusions are drawn without supporting evidence or without clearly explaining the limitations of the study.

This is where having a detailed understanding of both research and the project can be of huge value. The goal in most projects should be to distil the most pertinent insights down to a single page of ‘so-whats’, where each insight is explained in relation to the project and summarised in actionable terms (along with an indication of confidence with which the insight can be relied upon).

 

6. Ignoring the results

Sometimes the results of the research can be inconvenient. They might suggest that a particular concept favoured by the project team is not the one to progress. Worse still, it may bring the whole viability of a project into question.

One extreme option is simply to ignore this research, to bury it and forget it ever happened. A less severe version of this is to cherry-pick, to ignore the negative results and focus on the positive parts that reaffirm the desired direction. Both are equally foolhardy and will result in further investment in a project that is unlikely to be successful. The far more prudent thing to do is to acknowledge that the research findings are valid and face the potential reality that previous assumptions were incorrect or have changed. While inconvenient at the time, the long-term gains in terms of product success are worth striving for.

 

Conclusions

From the sidelines, it can be very easy to dismiss all of these pitfalls as easily avoided. However, under the time pressures of live projects, they can be incredibly easy to fall into. Awareness of them is the first step to avoiding them, not just amongst the researchers, but also across the wider project team.

The next step is to plan for them. Committing the time and resources upfront is critical, as is ensuring that the right people are involved in the process, the right tools are used and the results are correctly heeded.

In our experience, integration is often the secret to success. Research that is fully integrated into the design process tends to be the most effective and most efficient. Rather than confining research to a small number of set exercises along the development path, fully integrated research involves continually testing assumptions throughout the design process – continually steering the direction and detail of the product or service in question.

 

Article written by Dan Jenkins, Research Senior Skill Leader