Special Issue on the Digital Threats of Hardware Security

Guest Editors
Aydin Aysu, North Carolina State University, USA
Scott Graham, US Air Force Institute of Technology, USA 

From a cybersecurity perspective, hardware has often been viewed as the trusted part of computing systems and the last fortress of defense against potential attackers. However, advanced threats such as Stuxnet made it clear that the air gap is no longer a defense, and more recent threats such as Meltdown/Spectre demonstrate that hardware architecture itself can be used as a Digital Threat. Indeed, the attack surface for modern systems may now include programmable logic controller (PLC) rootkits, fake or cloned chips that deliver degraded or malicious behavior, hardware Trojans that can be utilized as kill switches, and near-invisible chips that are not part of original board designs. Unfortunately, hardware attacks have also become easier to develop, cheaper to deploy, and feasible in more systems. If hardware can no longer be considered the effortless root of trust, then new theory, techniques, and tools must be developed to address these problems and to ensure secure design, synthesis, integration, and deployment of hardware components.

DTRAP aims to create a special issue highlighting the methods by which hardware can be used as a Digital Threat, or as a countermeasure.  Topics include: 

  • Case studies on hardware-based threat effectiveness and countermeasures 
  • Hardware tampering and tamper-resistance 
  • Hardware Trojans and Backdoors 
  • Reverse engineering and countermeasures 
  • Clone identification and anti-counterfeiting 
  • Security in reconfigurable hardware
  • Hardware-based security primitives (PUFs, RNGs) 
  • Side-channel attacks and defenses 
  • Fault attacks and countermeasures 
  • Emerging hardware authentication primitives 
  • Automatic identification of hardware security-critical parts 
  • Security-aware architectures 
  • Application of machine learning to hardware security 
  • Integration of secure hardware with higher level software, firmware, and micro-architectures 
  • Implementation of next-generation cryptography (homomorphic encryption, post-quantum, lightweight, etc.)

DTRAP invites both academics and practitioners to submit papers for consideration in the journal.  Extended conference papers are welcome, provided they have been revised to include at least 25% new content (and the previously published proceedings paper is submitted along with the research paper).  Practitioners are welcome to submit a journal-style research paper or a “Field Notes” paper.  A “Field Notes” paper is a short case report about emerging threats and developments, emphasizing a particular development or application over a rigorous study or proof of security.

To submit to this special issue, please visit ACM Manuscript Central at mc.manuscriptcentral.com/dtrap and select paper type “Special Issue on the Digital Threats of Hardware Security.” Details of the author guidelines for DTRAP are available at dtrap.acm.org/authors.cfm

Important Dates

Paper submission due: July 15, 2020
Initial review feedback: October 15, 2020
Revision due: December 15, 2020
Final review decision: March 2021 

Getting from Research to Practice

The field of medicine has discussed moving Research into Practice in many papers. I previously looked at commonalities in three papers; this time I’m going to consider this paper:

Davies, Barbara L. “Sources and models for moving research evidence into clinical practice.” Journal of Obstetric, Gynecologic, & Neonatal Nursing 31.5 (2002): 558-562.

The paper discussed two methods that have been considered for moving Research into Practice.

One is called the Cochrane Collaboration. This is a database of evidence-based studies in medicine. From the website: “Our vision is a world of improved health where decisions about health and health care are informed by high-quality, relevant and up-to-date synthesized research evidence.”

With so much research out there, the Cochrane Collaboration endeavors to create a database of the best evidence-based research for consumption by practitioners. This doesn’t itself accomplish the task of moving research into practice; what it does is let practitioners know which research has been done well and is a good candidate for use in practice.

The second method is best practice guidelines. From the paper:

Clinical practice guidelines are usually constructed by an expert panel and incorporate the results of relevant systematic reviews about a topic, as well as other types of research and the consensus views of expert clinician and researcher panel members.

These guidelines should be evidence-based, with recommendations for policy and practice. Per the paper, they are one of the most effective tools for improving health care quality.

What can we as Cybersecurity researchers take from this? First of all, we need more evidence-based results. Secondly, best practice guidelines should be created by a group of experts, preferably a panel that includes both researchers and practitioners. And thirdly, we need a central repository for both.

DTRAP supports these goals. Submit your evidence-based research or your best practice guidelines to https://dtrap.acm.org/

Hackers Can Turn Your Home Computer into a BOMB!

Twenty years ago, the Weekly World News published an article that warned you that hackers can make your PC explode:  

Be warned!  Having that device in your house is dangerous; be wary of the possible explosion and fire!  DDOS attacks are nothing compared to the bomb that waits for you in the future, or phishing, or spam.  Someone can craft an email that will turn that PC of yours into a grenade.

The National Vulnerability Database has vulnerability data going back to 2002.  I downloaded it, and guess what.  Not a single vulnerability is about fire, explosion, boom, or even grenade.  I even looked for smoke.  No smoke was found.  Nothing about turning a computer into a grenade by opening the wrong email.
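A search like the one described above can be sketched in a few lines of Python. This is a hypothetical sketch, assuming the legacy NVD JSON 1.1 data feeds (files like nvdcve-1.1-2002.json, each containing a top-level “CVE_Items” list); the field names below follow that schema, and the match is a plain substring test, so a short keyword like “boom” could also over-match words such as “boomerang”.

```python
import json

# Hypothetical keyword scan over downloaded NVD data, assuming the
# legacy JSON 1.1 feed layout (top-level "CVE_Items" list).
KEYWORDS = ("fire", "explosion", "boom", "grenade", "smoke")

def matching_cves(items, keywords=KEYWORDS):
    """Return IDs of CVEs whose English descriptions mention any keyword.

    Note: plain substring matching, so short keywords can over-match.
    """
    hits = []
    for item in items:
        cve = item["cve"]
        # Join all English-language description strings for this CVE.
        text = " ".join(
            d["value"].lower()
            for d in cve["description"]["description_data"]
            if d.get("lang") == "en"
        )
        if any(word in text for word in keywords):
            hits.append(cve["CVE_data_meta"]["ID"])
    return hits

# Usage with a downloaded feed file:
#   with open("nvdcve-1.1-2002.json") as f:
#       print(matching_cves(json.load(f)["CVE_Items"]))
```

Running this over the feeds from 2002 onward is the kind of check described above: an empty result list for these keywords is the “no smoke found” outcome.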

This article was written in a way to spread fear and distrust.  Why should you have a personal computer in your house if it’s going to blow up?  It’s dangerous!

This twenty year old article is Fake News.  Fake News isn’t a new problem, it’s an old problem with a new coat of paint.  

DTRAP has a special issue on Fake News and its threats to the Digital Ecosystem.  Write your own article and submit it today at https://dtrap.acm.org/

Negative Results are a Good Thing

Before I discuss negative results, I want to make sure we’re all on the same page.

Suppose you set up an experiment.  It’s a long and involved process, and at the end you have some results.  Congratulations!

It’s time to write things up which means you go over the experiment in depth.  Unfortunately, you realize that you made a mistake in your experiment.  Your results aren’t right.

Some people think that this is a negative result.  It isn’t; it’s a mistake.  You made a mistake in your experiment and the results are incorrect.  Call it an oops, call it an error, but what you don’t do is publish it.  What you do is redo the experiment, fixing your mistake.

Now let’s suppose your experiment was done correctly. Suppose your hypothesis is that all blue domains are malicious (Yes, I know this is a silly hypothesis.  Work with me on it.)  You set up your experiment, and at the end of the day you discover that out of the 300 million domains you’ve examined, only 535 are blue.  That’s a minuscule fraction of the total domains; in fact, it’s 0.000178% of the domains, far too small a sample to support a statistically significant claim.  Of those blue domains, 500 are malicious.  It’s still a statistically insignificant sample.
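The arithmetic in this made-up example can be checked in a few lines of Python (the domain counts are the hypothetical ones from above):

```python
# Counts from the hypothetical blue-domain experiment.
total_domains = 300_000_000
blue_domains = 535
malicious_blue = 500

# Express each count as a percentage of all domains examined.
blue_share = blue_domains / total_domains * 100
malicious_share = malicious_blue / total_domains * 100

print(f"{blue_share:.6f}% of domains are blue")          # ~0.000178%
print(f"{malicious_share:.6f}% are blue and malicious")  # ~0.000167%
```

Either way you slice it, the blue domains are a few ten-thousandths of a percent of the population, which is the point of the example.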

This is a negative result.  This is the result that should be reported.  Because:

  • It prevents others from repeating work that will produce negative results.
  • Science isn’t only about proving positives; it’s about proving negatives as well.  All results are needed for good science.
  • It can skew the discussion if the negative results aren’t known.

DTRAP is committed to publishing negative results in Cybersecurity.  Submit your paper to https://dtrap.acm.org/

Practical Implementations Please

One of the goals of DTRAP is to bridge the gap between researchers and practitioners.  Accomplishing this is a difficult task; I’ve talked about problems in the medical field before.

This time I’m going to talk about something different.  This paper

Ferguson, Jennifer L. “Putting the ‘what works’ research into practice: An organizational perspective.” Criminal Justice and Behavior 29.4 (2002): 472-492.

mentions a topic that I think is very interesting and applies to the research and practice of Digital Threats.

In the paper, the author says:

The lack of attention to the practical implementation of what works is problematic for organizations that wish to engage in effective correctional services.

I think that quote is not just applicable to Criminal Justice but to Digital Threats as well.

For example, suppose I design a new method for detecting malware on IoT devices.  My method is 99.999% effective but requires that each IoT device runs a separate process for the malware detection.

There are problems with this.  First, most IoT devices don’t have update paths, so this new method won’t be applicable to devices already deployed.  Secondly, most IoT devices are minimalistic; running an additional process for malware detection may be more than these devices can handle.

My paper that discusses the new method for detecting malware can’t just be about my really cool method.  It also needs to discuss the problem of the practical implementation in order for it to be a practical result.  Otherwise, it is an interesting paper that won’t be implemented due to the problems discussed above.

DTRAP wants to bridge this gap, so we encourage papers with practical solutions. Show us how your amazing result can be used in practice. Submit your work at https://dtrap.acm.org/

How Do You Solve a Problem Like Mirai?

In an earlier post I talked about the motivations for DDOS: political acts, extortion, revenge… all typical motivations.

That doesn’t really account for the Mirai DDOS. The first one, on September 19, 2016, was motivated by gaming: the attacker went after a company that hosted a DDOS-protection tool used by Minecraft servers. Then there was an even bigger one on October 12. According to the FBI, that one was also motivated by Minecraft.

Let’s take a step back and look at the Mirai botnet. First, it was built from IoT devices, the first known attack of its kind to originate from them. The botnet was created by taking advantage of default passwords, Telnet, and other vulnerabilities, but I’m going to concentrate on those first two.

I’m concentrating on those two factors because they are issues we have known about for years. There are 147 CVEs that mention default passwords. This goes back to 1999, but only because CVEs were first recorded in 1999. Telnet has had a secure replacement in SSH since the mid-1990s, so we shouldn’t still be relying upon it.

And yet, still IoT devices use both insecure methods, allowing Mirai to cause havoc at least twice. We learned to use ssh and other secure access methods on computers attached to the Internet, but it appears we didn’t learn the same lesson with IoT.

This is a case where we really could have benefited from hindsight.

After the DDOS, we fixed everything that caused this and now everything is great, right?

Well, no, not so much. Mirai is still out there; it has been updated to look for new vulnerabilities, and it still uses default passwords and Telnet. We still haven’t learned, and it’s still possible for IoT devices to be easily added to a botnet.

Studying the causes of this problem as well as proposing potential solutions is a good topic for a Field Note for DTRAP. If this is in your field of expertise or you’d like to propose a solution, write one.

Minding the Air Gap

Once upon a time (I always wanted to start a blog post like that) the air gap was considered a primary defense. If your computer wasn’t connected in any way to the Internet, then it was safe. Safe from viruses, safe from external attackers; the gap between the computer and the Internet protected it.

In 2010, that all changed. Stuxnet was a malicious worm that installed itself via a USB drive. Even if your computer was not connected to the Internet, it was still vulnerable. In particular, Stuxnet attacked SCADA systems, but the broader lesson holds: any computer or computing device, whether or not it is actively connected to the Internet, is vulnerable.

What does this say to me? It says that we have to be careful of our assumptions. We assumed that because a computer wasn’t on the Internet, it wouldn’t get a virus. My first encounter with a virus was actually transmitted by floppy disk, so the idea that an air gap solved everything only took hold after the Internet became prevalent.

We forgot what we learned from floppy disks and had to relearn it with USBs.

Going forward, what else can we learn from past events? George Santayana said, “Those who cannot remember the past are condemned to repeat it.” We need to learn from history, which is the point of the column “With the Benefit of Hindsight”. If there is a historical event that resonates with you, then consider the event. What did we learn from it, what should we have done differently, and what can we do going forward?

Submit your article to https://dtrap.acm.org/

Special Issue on Fake News

Fake news, especially on social media, is now viewed as one of the main digital threats to democracy, journalism, and freedom of expression. Our economies are not immune to the spread of fake news either, with fake news being connected to stock market fluctuations and massive trades. The goal of this special issue is to promote exchange of research and studies that (1) aim to understand and characterize fake news and its patterns and how it can be differentiated from other similar concepts such as false/satire news, misinformation, disinformation, among others, which helps deepen our understanding of fake news; and (2) systematically detect fake news by determining its credibility, verifying its facts, assessing its style, or determining its propagation. To facilitate further research in fake news, this special issue especially welcomes research articles, new open access datasets, repositories, and benchmarks for fake news research, broadening research on fake news detection and its development.

Our guest editors for the issue are:

  • Reza Zafarani, Syracuse University
  • Huan Liu, Arizona State University
  • Vir V. Phoha, Syracuse University
  • Javad Azimi, Facebook

Topics of interest for the issue include:

  • Patterns of Fake News
    • Internet measurements on Fake News
    • User behavior analysis with respect to Fake News
    • Patterns of Fake News Distribution/Consumption/Response
    • Tracing and characterizing the propagation of fake news and true news
  • Fake News Detection
    • Supervised Fake News Detection
    • Semi-Supervised Fake News Detection
    • Unsupervised Fake News Detection
    • Early Detection of Fake News
    • Deep Nets for Fake News Detection
    • Representation for Fake News
  • Mining of News Content
    • Text Mining of News Content
    • Analysis of Images, Videos, and Audio
  • Fact Checking
    • Knowledge-based (e.g., Knowledge-graphs) analysis
    • Analyzing News Credibility/Credibility Assessment
    • Analyzing Source Credibility
  • Malicious Entity Detection
  • Fake News Benchmarks
  • Fake News Datasets
  • Fake News Open Repositories

We welcome two types of research contributions:

  • Research manuscripts reporting novel methodologies and results (up to 25 pages)
  • Benchmarks, Datasets, Repositories, and Demonstration Systems that enable and facilitate further research on fake news. These papers should be of interest to the broad fake news research community (10 pages + links to such systems)

Visit https://dtrap.acm.org/ to submit your manuscript and for important dates and deadlines.

DDOS Me? No, DDOS You!

When someone is murdered, the detective usually asks the question “cui bono?”. Well, they may not ask it in Latin, but they want to know who benefits from this person’s death.

I wanted to know the same thing when a DDOS attack is executed. Some of the largest were done not for gain, but for revenge. For example, Spamhaus was DDOSed in 2013 by a group calling themselves Stophaus.

I can summarize this attack as “You made me mad, so I’m going to knock you off the net.”

Who gains from this? The person making the attack gets satisfaction, usually accompanied by the bully’s battle cry “Look what you made me do!” There’s no monetary gain there at all. The defender has to buy a defense to stay on the network, so they lose money; the only one who gains in this situation is the provider of the DDOS defense.

On the other hand, there are businesses receiving extortion letters: pay us off or we DDOS you. Apparently there’s little follow-through on these threats, but if you pay off a blackmailer, expect to keep paying. The best bet is to strengthen your defenses.

Again, the attackers gain nothing monetarily and it’s the DDOS defense companies who make money.

DDOS is also occasionally a political act between countries.

There’s no money changing hands here either, except for the defenders.

The exception to all of this, of course, is if you buy your DDOS from someone else. Then they get money. It’s shockingly easy to buy a DDOS.

  • IP Booters are also DDOS providers
  • DDOS cheap!
  • Professional Quality DDOS

Knowing all of this, what can we do to interrupt the DDOS economic model?

If you have ideas or something you think will spark interest, write a Field Note and submit it at https://dtrap.acm.org/. Share your ideas with researchers and practitioners and help bridge the gap.

Malware and Caves and Dragons, oh my!

In an earlier post I talked about sampling bias.  I’m going to expand more on that subject, because I think it’s interesting and useful.

The caveman is a popular trope.  We like to think that our ancestors spent all their time in caves, because that’s where we’ve found the evidence.  The caves protect the artifacts, so of course they’re found there.  That doesn’t mean that our ancestors actually spent all their time in caves.  It just means that’s where the evidence is.

We run into similar problems when studying malware.  We often use a sandbox to contain the malware, but that often means we only see what the malware authors want us to see.  We’re stuck in a cave, in other words.

The problem is that outside of the cave lies the potential to affect other computers that aren’t interested in being part of the experiment.  We want to know how malware works outside of the cave, but as old maps used to say, “Here be dragons.” (Well, they didn’t say that, but it works for this analogy.)  Outside of the cave, dragons.  Inside of the cave, misinformation.

We want containment to avoid the dragons, but at the same time we must be aware that we are dealing with potential misinformation.  This makes generalizing on malware difficult, but in order to study it appropriately, we must try.

DTRAP promotes containment in studying all cybersecurity, not just malware.  We want to be responsible researchers and not release the dragons.