Practical Implementations, Please

One of the goals of DTRAP is to bridge the gap between researchers and practitioners. Accomplishing this is a difficult task; I’ve talked before about similar problems in the medical field.

This time I’m going to talk about something different.  This paper

Ferguson, Jennifer L. “Putting the ‘what works’ research into practice: An organizational perspective.” Criminal Justice and Behavior 29.4 (2002): 472-492.

mentions a topic that I think is very interesting and applies to the research and practice of Digital Threats.

In the paper, the author says:

The lack of attention to the practical implementation of what works is problematic for organizations that wish to engage in effective correctional services.

I think that quote is not just applicable to Criminal Justice but to Digital Threats as well.

For example, suppose I design a new method for detecting malware on IoT devices.  My method is 99.999% effective but requires that each IoT device run a separate process for the malware detection.

There are problems with this.  First, most IoT devices don’t have update paths, so the new method won’t be applicable to devices already in the field.  Second, most IoT devices are minimalistic; running an additional process for malware detection may be more than they can handle.

My paper on the new detection method can’t just be about my really cool technique.  It also needs to discuss practical implementation in order to be a practical result.  Otherwise, it is an interesting paper that won’t be deployed, for exactly the reasons above.

DTRAP wants to bridge this gap, so we encourage papers with practical solutions. Show us how your amazing result can be used in practice. Submit your work at https://dtrap.acm.org/

How Do You Solve a Problem Like Mirai?

In this post I talked about the motivations for DDOS. Political acts, extortion, revenge… all typical motivations.

That doesn’t really account for the Mirai DDOS attacks. The first one, on September 19, 2016, was motivated by gaming: the attackers went after the company that hosted a DDOS-protection tool used by Minecraft servers. Then there was an even bigger attack on October 21. According to the FBI, that one was also motivated by Minecraft.

Let’s take a step back and look at the Mirai botnet. First, it was built from IoT devices, one of the first large-scale attacks known to originate from them. The botnet was assembled by taking advantage of default passwords, telnet, and other vulnerabilities, but I’m going to concentrate on those first two.

I’m concentrating on those two factors because they are issues we have known about for years. There are 147 CVEs that mention default passwords. This record goes back to 1999, but only because CVEs were first recorded in 1999. telnet has had a secure replacement in SSH since the 1990s, so we shouldn’t still be relying upon it.

And yet IoT devices still use both insecure methods, allowing Mirai to cause havoc at least twice. We learned to use ssh and other secure access methods on computers attached to the Internet, but it appears we didn’t learn the same lesson with IoT.
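
To make this concrete, here is a minimal sketch of the kind of audit a practitioner could run against a network they own and are authorized to test. The 192.168.1.0/24 range is an assumed home-network address space, and the check only asks whether the telnet port accepts a TCP connection, nothing more:

    # Minimal sketch: audit a network you own for devices still exposing
    # telnet. The 192.168.1.0/24 range is an assumed home network; scan only
    # networks you are authorized to test.
    import socket

    def telnet_open(host: str, port: int = 23, timeout: float = 1.0) -> bool:
        """Return True if the host accepts a TCP connection on the telnet port."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        for i in range(1, 255):
            host = f"192.168.1.{i}"  # assumed address space
            if telnet_open(host):
                print(f"{host} still exposes telnet; disable it and use ssh")

A crude check like this is the cheapest version of the lesson above: if anything answers on port 23, the Mirai lesson hasn’t been learned yet.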

This is a case where we really could have benefited from hindsight.

After the DDOS, we fixed everything that caused this and now everything is great, right?

Well, no, not so much. Mirai is still out there; it has been updated to look for new vulnerabilities, and it still uses default passwords and telnet. We still haven’t learned, and it’s still possible for IoT devices to be easily added to a botnet.

Studying the causes of this problem as well as proposing potential solutions is a good topic for a Field Note for DTRAP. If this is in your field of expertise or you’d like to propose a solution, write one.

Minding the Air Gap

Once upon a time (I always wanted to start a blog post like that), the air gap was considered a primary defense. If your computer wasn’t connected in any way to the Internet, then it was safe: safe from viruses, safe from external attackers. The gap between the computer and the Internet protected your computer.

In 2010, that all changed. Stuxnet was a malicious computer worm that spread via USB drives. Even if your computer was not connected to the Internet, it was still vulnerable. In particular, Stuxnet attacked SCADA systems, but the broader lesson holds: any computer or computing device, whether or not it is actively connected to the Internet, is vulnerable.

What does this say to me? It says that we have to be careful about our assumptions. We assumed that because a computer wasn’t on the Internet, it wouldn’t get a virus. My first encounter with a virus was actually transmitted by floppy disk, so the idea that an air gap solved everything only took hold after the Internet became prevalent.

We forgot what we learned from floppy disks and had to relearn it with USB drives.

Going forward, what else can we learn from past events? George Santayana said, “Those who cannot remember the past are condemned to repeat it.” We need to learn from history, which is the point of the column “With the Benefit of Hindsight”. If there is a historical event that resonates with you, then consider the event: What did we learn from it, what should we have done differently, and what can we do going forward?

Submit your article to https://dtrap.acm.org/

Special Issue on Fake News

Fake news, especially on social media, is now viewed as one of the main digital threats to democracy, journalism, and freedom of expression. Our economies are not immune to its spread either, with fake news being connected to stock market fluctuations and massive trades. The goal of this special issue is to promote the exchange of research and studies that (1) aim to understand and characterize fake news and its patterns, and how it can be differentiated from similar concepts such as false or satire news, misinformation, and disinformation, deepening our understanding of fake news; and (2) systematically detect fake news by determining its credibility, verifying its facts, assessing its style, or tracing its propagation. To facilitate further work in this area, the special issue especially welcomes research articles as well as new open-access datasets, repositories, and benchmarks for fake news research.

Our guest editors for the issue are:

  • Reza Zafarani, Syracuse University
  • Huan Liu, Arizona State University
  • Vir V. Phoha, Syracuse University
  • Javad Azimi, Facebook

Topics of interest for the issue include:

  • Patterns of Fake News
    • Internet measurements on Fake News
    • User behavior analysis with respect to Fake News
    • Patterns of Fake News Distribution/Consumption/Response
    • Tracing and characterizing the propagation of fake news and true news
  • Fake News Detection
    • Supervised Fake News Detection
    • Semi-Supervised Fake News Detection
    • Unsupervised Fake News Detection
    • Early Detection of Fake News
    • Deep Nets for Fake News Detection
    • Representation for Fake News
  • Mining of News Content
    • Text Mining of News Content
    • Analysis of Images, Videos, and Audio
  • Fact Checking
    • Knowledge-based (e.g., Knowledge-graphs) analysis
    • Analyzing News Credibility/Credibility Assessment
    • Analyzing Source Credibility
  • Malicious Entity Detection
  • Fake News Benchmarks
  • Fake News Datasets
  • Fake News Open Repositories

We welcome two types of research contributions:

  • Research manuscripts reporting novel methodologies and results (up to 25 pages)
  • Benchmarks, datasets, repositories, and demonstration systems that enable and facilitate further research on fake news. These papers should be of interest to the broad fake news research community (10 pages plus links to such systems).

Visit https://dtrap.acm.org/ to submit your manuscript and for important dates and deadlines.

DDOS Me? No, DDOS You!

When someone is murdered, the detective usually asks the question “cui bono?”. Well, they may not ask it in Latin, but they want to know who benefits from this person’s death.

I want to know the same thing when a DDOS attack is executed. Some of the largest attacks were done not for gain, but for revenge. For example, Spamhaus was DDOSed in 2013 by a group calling themselves Stophaus.

I can summarize this attack as “You made me mad, so I’m going to knock you off the net.”

Who gains from this? The person making the attack gets satisfaction, usually accompanied by the bully’s battle cry: “Look what you made me do!” There is no monetary gain there at all. The defender has to buy a defense to stay on the network, so they lose money; the only one who gains in this situation is the provider of the DDOS defense.

On the other hand, there are businesses receiving extortion letters: pay us off or we DDOS you. Apparently there’s little follow-through, but if you pay off a blackmailer, expect to keep paying. The best bet is to strengthen your defenses.

Again, the attackers gain nothing monetarily and it’s the DDOS defense companies who make money.

DDOS is also occasionally a political act between countries.

There’s no money changing hands here either, except for the defenders.

The exception to all of this, of course, is if you buy your DDOS from someone else. Then they get money. It’s shockingly easy to buy a DDOS.

  • “IP Booters are also DDOS providers”
  • “DDOS cheap!”
  • “Professional Quality DDOS”

Knowing all of this, what can we do to interrupt the DDOS economic model?

If you have ideas or something you think will spark interest, write a Field Note and submit it at https://dtrap.acm.org/. Share your ideas with researchers and practitioners and help bridge the gap.

Malware and Caves and Dragons, oh my!

In this post I talked about sampling bias.  I’m going to expand more on that subject, because I think it’s interesting and useful.

The caveman is a popular trope.  We like to think that our ancestors spent all their time in caves, because that’s where the evidence has been found.  Caves protect artifacts, so of course that’s where they turn up.  It doesn’t mean our ancestors spent all their time in caves; it just means that’s where the evidence is.

We run into similar problems when studying malware.  We often use a sandbox to contain the malware, but that frequently means we only see what the malware authors want us to see.  We’re stuck in a cave, in other words.
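
To make the cave problem concrete, here is a minimal sketch of the kind of environment check sandbox-aware malware can perform before deciding how to behave. The specific indicators below (CPU count, hostname) are illustrative assumptions for the example, not the logic of any particular malware family:

    # Illustrative sketch only: the kind of environment check sandbox-aware
    # malware performs before deciding how to behave. The indicators below
    # are assumptions for the example, not any real family's logic.
    import os
    import platform

    def looks_like_sandbox() -> bool:
        indicators = [
            (os.cpu_count() or 0) < 2,             # analysis VMs are often tiny
            "sandbox" in platform.node().lower(),   # hypothetical hostname hint
        ]
        return any(indicators)

    if looks_like_sandbox():
        print("behaving innocently")  # the analyst in the cave sees only this
    else:
        print("real behavior here")   # outside the cave

Anything the malware gates behind a check like this simply never happens inside the cave, which is exactly the misinformation problem.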

The problem is that outside of the cave lies the potential to affect other computers whose owners never agreed to be part of the experiment.  We want to know how malware works outside of the cave, but as old maps used to say, “Here be dragons”. (Well, they didn’t actually say that, but it works for this analogy.)  Outside of the cave, dragons. Inside of the cave, misinformation.

We want containment to avoid the dragons, but at the same time we must be aware that we are dealing with potential misinformation.  This makes generalizing about malware difficult, but in order to study it appropriately, we must try.

DTRAP promotes containment in all cybersecurity research, not just malware research.  We want to be responsible researchers and not release the dragons.

A Little More Conversation

Every bit of research begins with an idea. I wonder if… Is it possible that… Could we maybe…

These are sentences I’ve started when doing my research. Things like: I wonder if this malware uses that vulnerability, and what that means for the future. Is it possible that malware authors are collaborating? Could we learn about the economic model of DDOS providers and use it to subvert them?

On the other hand, practitioners often start with a problem. How can I stop… What will let me… What should I expect…

I started the same sentences when I worked for a startup ISP. How can I stop the malicious traffic? What will let me find malicious actors on my network? What should I expect if I enact a new security policy?

We, as a field, need to communicate these questions between researchers and practitioners. The questions from a practitioner should spark ideas for a researcher, helping them focus their research on current problems. Likewise, the questions a researcher poses should be considered by practitioners: are those questions the solutions to your future problems?

The goal of the Field Notes article type at DTRAP is to spark that research and those questions. As a practitioner, is there something you wish researchers could answer? Write a Field Note that outlines the question, why it’s important, and what you think the future holds.

Field Notes are also open to researchers; we want to hear what you think is a future problem. Will practitioners feel the same way? We want to use Field Notes to open a conversation between the two groups.

Submit your Field Note to https://dtrap.acm.org/ and help us have a little more conversation.

Sampling Bias

The amount of data in cybersecurity research is sometimes overwhelming.  To cope with it, we sample: we choose a subset of the overwhelming ocean of data and deal with a much smaller pond.

We must be careful when we do this sampling; it’s possible to introduce sampling bias.  That happens when some parts of the data ocean are more likely to be picked than others. Continuing with the ocean analogy, it’s when you draw all your samples from the Bermuda Triangle but ignore the rest of the Atlantic.  In that case your results aren’t generalizable to the entire Atlantic Ocean; they’re only relevant to the Bermuda Triangle.

Which is exactly the problem with sampling bias.

Returning to the world of cybersecurity, suppose you’re studying network traffic.  If you only study the traffic that your organization sees, the results aren’t generalizable to the world at large, unless you have a very good reason to believe otherwise.  The statement “My organization is just like the Internet” requires a lot of explanation. It may be true, from a certain point of view, but you must explain that point of view.
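
As a toy illustration of how much a biased sample can skew even a simple estimate, here is a sketch using a synthetic traffic population; every number in it is invented for the example:

    # Toy sketch of sampling bias: estimating the average flow size from a
    # uniform sample versus a biased one. All numbers are synthetic.
    import random

    random.seed(42)

    # Synthetic population: 90% small flows, 10% large flows (made-up sizes).
    population = [random.gauss(1_000, 100) for _ in range(9_000)] + \
                 [random.gauss(50_000, 5_000) for _ in range(1_000)]

    true_mean = sum(population) / len(population)

    # Uniform sample: every flow equally likely to be picked.
    uniform = random.sample(population, 500)

    # Biased sample: only "the Bermuda Triangle" -- the small flows.
    biased = random.sample([x for x in population if x < 5_000], 500)

    print(f"true mean:    {true_mean:,.0f}")
    print(f"uniform mean: {sum(uniform) / len(uniform):,.0f}")  # close to true mean
    print(f"biased mean:  {sum(biased) / len(biased):,.0f}")    # far too low

With these invented numbers the true mean is about 5,900, the uniform sample lands near it, and the biased sample reports about 1,000. The pond you draw from determines the answer you get.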

DTRAP wants your results to be generalizable, that is, to have external validity.  Avoid sampling bias in your research.

Moving From Research to Practice

Moving cybersecurity research into practice is often difficult, for many reasons.

I decided to look at another field which has similar issues, medicine, and read a few papers on the topic. These papers include:

  • Glasgow, Russell E., et al. “The future of health behavior change research: What is needed to improve translation of research into health promotion practice?” Annals of Behavioral Medicine 27.1 (2004): 3-12.
  • Berwick, Donald M. “Disseminating innovations in health care.” JAMA 289.15 (2003): 1969-1975.
  • Glasgow, Russell E., and Karen M. Emmons. “How can we increase translation of research into practice? Types of evidence needed.” Annual Review of Public Health 28 (2007): 413-433.

These papers considered the barriers medical research faces before it can become clinical practice. Each paper examined various barriers, but I was interested in one finding that all three shared.

The complexity of a research result was a key indicator of whether doctors would integrate it into practice: the more complex the result, the harder it is to move from a research environment into clinical practice. Since doctors and nurses are under pressure to provide more and more support to patients with fewer resources, the addition of a complex procedure is often resisted.

Can we consider the same thing in Cybersecurity?

Yes and no; it depends on how we define complexity, which is tricky. Do you add complexity when you buy an entirely new defensive system, or is that just a question of expenditure? If we define new procedures for users to avoid certain attacks, is that complexity? The details of what makes something complex will not transfer directly from medicine to cybersecurity.

We can, however, take lessons from medicine and apply them to cybersecurity. For example, the perceived benefit of a change has a lot to do with its adoption, and that seems intuitively to apply to cybersecurity as well. If a change is complex but promises to defend your organization from all phishing attacks, then it may be worth the mental cost of understanding it. But we have very few such broad assurances in cybersecurity, so it is perhaps safest to search for simple, explainable improvements.

This is a topic that we need to study further so we can understand what difficulties are present and how we can do better. DTRAP wants to bridge the gap between researchers and practitioners and I believe this is an important part of that.

Special Issue on Cyber Risk and Insurance

We’re proud to announce a special issue on Cyber Risk and Insurance, with guest editors Firasat Khan of Metropolitan State University, Minnesota, and Tirthankar Ghosh of the University of West Florida.

Businesses and organizations are increasingly aware of the level of cyber risk they face, and cyber risk has consistently ranked among their top risks.  Cyber risk insurance promises increased assurance of business continuity through risk transfer, but the rapidly changing landscape of cyberattack surfaces, vectors, and resulting impacts introduces unique challenges that practitioners and researchers alike need to understand better.

Total cyber risk insurance premiums in the United States alone are at $3-4B today and, by most estimates, are forecast to rise rapidly to $15-20B within the next decade; similar growth is expected globally.  The bulk of the growth comes from small businesses reacting to major stand-alone instances of cyber breaches and failures.  A key concern is what would happen if a widespread cyber incident impacted hundreds or thousands of insured firms simultaneously.  The relatively low loss ratio today keeps the cyber insurance market growing, but what happens if a major widespread catastrophe comes to pass? How the insurance industry would be impacted is an open question.

While the intersection of cyber risk and insurance is a complex domain to navigate, the Insurance Core Principles (ICPs) developed by the International Association of Insurance Supervisors (IAIS) present widely accepted principles, standards, and guidance for the insurance sector. The National Association of Insurance Commissioners (NAIC) has recently published helpful guidance outlining the ICPs that relate to cyber risk (supervision of cyber risk, enterprise risk management, and information sharing and cooperation).  Cyber incident and breach experience and emerging applied knowledge need to be reconciled with accepted insurance principles; an appropriate framework for understanding and applying responsive insurance products is very much needed.

This DTRAP special issue on Cyber Risk and Insurance seeks to highlight recent developments in how organizations should evaluate their cyber risks, verify that their controls are appropriate, and ensure adequate insurance coverage. The latter is particularly relevant in light of clients complaining of denied claims when their cyber insurance coverage turned out not to have been adequate in the first place.

Topics include:

  • Cyber Risk Assessment
  • Cyber Risk Mitigation
  • Cyber Risk Modeling
  • Cyber Risk Reporting and Dashboards
  • Cybersecurity Controls and Standards
  • Cybersecurity Compliance
  • Cybersecurity Metrics and Measurements
  • Cybersecurity Maturity Models
  • Cybersecurity Economics and Shared Responsibility in Cyber Market
  • Cyber Insurance
  • Cyber Value-at-Risk (CVaR)
  • Business Interruption

Expected contributions – We welcome two types of research contributions:

  • Research manuscripts reporting novel methodologies and results (up to 25 pages).
  • Experience papers (field notes) that report on lessons learned from addressing specific issues within the scope of the call. These papers should be of interest to the broad cyber risk management community. (10+ pages plus an optional appendix).

Important dates and timeline:

  • Initial submission: August 30, 2019
  • First review: November 30, 2019
  • Revised manuscripts: December 31, 2019
  • Second review: February 15, 2020
  • Camera-ready manuscripts: March 30, 2020
  • Publication: May 30, 2020