So you’re submitting an article to a journal. Based on the journal’s description, you think it’s a good fit, but how can you be sure?
On the flip side, you’re reviewing an article for a journal. You think it’s a well-written paper, but on what basis should you evaluate it?
At DTRAP, we tackled this problem by creating a rubric. It helps authors determine whether their paper fits the journal, and it helps reviewers evaluate papers for publication. One rubric neatly solves both problems.
Let’s pull the rubric apart and discuss each section.
- Novelty/Utility
Your research should be novel. A new story, as it were. It should also be useful and relevant to the field of Digital Threats. Otherwise, why would it be in this journal?
- Internal Validity
Consistency! The parts of the experiment or observation described in the paper should add up to the whole. That means there’s enough in the paper to fully explain and support your results.
- External Validity
The experiment or observations in the paper should be representative of the object of study as it exists in the real world. For example, if you’re drawing conclusions about Windows vulnerabilities, then only examining evidence about Linux vulnerabilities wouldn’t cut it.
- Containment
Don’t let your malware out into the wild, where it can rampage and break things. Keep it in the cage where it belongs. In other words, your experiment shouldn’t affect the real world negatively (though it’s fine to affect the world in a positive way, such as taking down a malicious domain).
- Transparency in Method
No, magic didn’t solve the problem in the paper. There are underlying workings to the magic; share them. Describe your method in enough detail that others can follow, and ideally reproduce, what you did.
- Transparency in Data
The paper analyzes data to demonstrate a result, but doesn’t clearly explain what data it used? That’s a problem. Explain the data: what it is, where it came from, and how it was collected. Even better, share the data itself and earn one of ACM’s artifact badges.