Before I discuss negative results, I want to make sure we’re all on the same page.
Suppose you set up an experiment. It’s a long and involved process, and at the end you have some results. Congratulations!
It’s time to write things up, which means you go over the experiment in depth. Unfortunately, you realize that you made a mistake in your experiment. Your results aren’t right.
Some people think that this is a negative result. It isn’t; it’s a mistake. You made a mistake in your experiment, and the results are incorrect. Call it an oops, call it an error, but what you don’t do is publish it. What you do is redo the experiment, fixing your mistake.
Now let’s suppose your experiment was done correctly. Suppose your hypothesis is that all blue domains are malicious (yes, I know this is a silly hypothesis; work with me on it). You set up your experiment, and at the end of the day you discover that out of the 300 million domains you’ve examined, only 535 are blue. That’s a minuscule fraction of the total, roughly 0.00018% of the domains, a statistically insignificant number. Of those blue domains, 500 are malicious. It’s still statistically insignificant.
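The arithmetic above is easy to check. Here is a short Python sketch using the hypothetical figures from the blue-domain example:

```python
# Hypothetical figures from the blue-domain example.
total_domains = 300_000_000
blue_domains = 535
malicious_blue = 500

# Fraction of all domains that are blue, as a percentage.
blue_pct = blue_domains / total_domains * 100
print(f"{blue_pct:.5f}% of domains are blue")

# Within the blue domains, the malicious share looks high...
malicious_share = malicious_blue / blue_domains
print(f"{malicious_share:.1%} of blue domains are malicious")
# ...but 535 domains out of 300 million is far too small a sample
# to support the hypothesis that all blue domains are malicious.
```

The point of the sketch is only that a high malicious rate inside a vanishingly small subset still tells you almost nothing about the hypothesis.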
This is a negative result, and it is the result that should be reported, because:
- It saves others from repeating work that will produce the same negative result.
- Science isn’t only about proving positives; it’s also about proving negatives. All results are needed for good science.
- If negative results aren’t known, the discussion gets skewed.
DTRAP is committed to publishing negative results in Cybersecurity. Submit your paper to https://dtrap.acm.org/