There is a specific, repeatable failure sequence in market research fieldwork that most agencies have experienced at least once. It goes like this:
| Stage | What happens |
|---|---|
| Hour 1 | Study goes live. Links are sent. Suppliers confirm receipt. Everything looks normal. |
| Hours 2–6 | Completions are coming in. The PM checks the platform once and sees progress. She moves on to the other eleven studies she is managing. |
| Hours 6–18 | Something changes with one supplier. Completion rate drops. Or a quality anomaly begins accumulating. Or a quota cell is filling faster than the others, driven by a supplier who did not receive clear quota instructions. |
| Hours 18–36 | The problem is now embedded in the data. The quota cell that was overfilling has already exceeded its target. The supplier whose quality dropped has already contributed five hundred completions. The anomaly that started in hour eight has propagated through a significant portion of the dataset. |
| Hours 36–48 | The PM does her daily check. Or the supplier emails with an update. Or the platform shows a number that does not look right. The problem is now detected. |
| After detection | The options available are all worse than the options that were available at Hour 6. Re-field a quota cell. Clean a contaminated dataset. Extend the timeline. Explain the delay to the client. |
The problem in this sequence is not the PM's attention or capability. It is the absence of automated monitoring that would have surfaced the anomaly at Hour 6, when the range of available responses was still wide.
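The monitoring described above does not need to be sophisticated to catch the Hour 6 anomaly. A minimal sketch, assuming hourly per-supplier completion counts and per-cell quota targets are available (all names, thresholds, and field layouts here are illustrative assumptions, not any platform's actual logic):

```python
# Sketch of two automated checks that would flag the Hour 6 anomalies:
# a per-supplier completion-rate drop, and a quota cell exceeding its target.
# Thresholds and structures are hypothetical, for illustration only.
from dataclasses import dataclass, field


@dataclass
class SupplierMonitor:
    baseline_per_hour: float            # expected completions/hour, set at launch
    drop_threshold: float = 0.5         # alert if rate falls below 50% of baseline
    hourly_counts: list = field(default_factory=list)

    def record_hour(self, completions: int) -> list:
        """Record one hour of completions; return any alerts triggered."""
        self.hourly_counts.append(completions)
        if completions < self.drop_threshold * self.baseline_per_hour:
            return [f"completion rate dropped to {completions}/hr "
                    f"(baseline {self.baseline_per_hour}/hr)"]
        return []


def check_quotas(cells: dict) -> list:
    """cells maps cell name -> (current_count, target); flag overfilled cells."""
    return [f"quota cell '{name}' overfilled: {count}/{target}"
            for name, (count, target) in cells.items() if count > target]
```

Run hourly against the platform's numbers, the first check fires the moment a supplier's rate collapses, and the second fires before an overfilling cell is hundreds of completions past target, while re-briefing the supplier is still cheap.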
SoftSight — Real-time anomaly alerts. Every study. Every supplier. softsight.io