Special Issue on Fake News

Fake news, especially on social media, is now viewed as one of the main digital threats to democracy, journalism, and freedom of expression. Our economies are not immune to the spread of fake news either: fake news has been connected to stock market fluctuations and massive trades. The goal of this special issue is to promote the exchange of research and studies that (1) aim to understand and characterize fake news, its patterns, and how it can be differentiated from similar concepts such as false or satire news, misinformation, and disinformation, thereby deepening our understanding of fake news; and (2) systematically detect fake news by determining its credibility, verifying its facts, assessing its style, or tracing its propagation. To facilitate further research, this special issue especially welcomes research articles as well as new open-access datasets, repositories, and benchmarks that broaden research on fake news detection.

Our guest editors for the issue are:

  • Reza Zafarani, Syracuse University
  • Huan Liu, Arizona State University
  • Vir V. Phoha, Syracuse University
  • Javad Azimi, Facebook

Topics of interest for the issue include:

  • Patterns of Fake News
    • Internet measurements on Fake News
    • User behavior analysis with respect to Fake News
    • Patterns of Fake News Distribution/Consumption/Response
    • Tracing and characterizing the propagation of fake news and true news
  • Fake News Detection
    • Supervised Fake News Detection
    • Semi-Supervised Fake News Detection
    • Unsupervised Fake News Detection
    • Early Detection of Fake News
    • Deep Nets for Fake News Detection
    • Representation for Fake News
  • Mining of News Content
    • Text Mining of News Content
    • Analysis of Images, Videos, and Audio
  • Fact Checking
    • Knowledge-based analysis (e.g., knowledge graphs)
    • Analyzing News Credibility/Credibility Assessment
    • Analyzing Source Credibility
  • Malicious Entity Detection
  • Fake News Benchmarks
  • Fake News Datasets
  • Fake News Open Repositories

We welcome two types of research contributions:

  • Research manuscripts reporting novel methodologies and results (up to 25 pages)
  • Benchmarks, Datasets, Repositories, and Demonstration Systems that enable and facilitate further research on fake news. These papers should be of interest to the broad fake news research community (10 pages + links to such systems).

Visit https://dtrap.acm.org/ to submit your manuscript and for important dates and deadlines.

DDoS Me? No, DDoS You!

When someone is murdered, the detective usually asks the question “cui bono?”. Well, they may not ask it in Latin, but they want to know who benefits from this person’s death.

I want to know the same thing when a DDoS attack is executed. Some of the largest attacks were done not for gain, but for revenge. For example, Spamhaus was DDoSed in 2013 by a group calling themselves Stophaus.

I can summarize this attack as “You made me mad, so I’m going to knock you off the net.”

Who gains from this? The person making the attack gets satisfaction, usually accompanied by the bully’s battle cry “Look what you made me do!” No monetary gain there at all. The defender has to buy a defense to keep themselves on the network, so they lose money; the only one to gain in this situation is the provider of the DDoS defense.

On the other hand, there are businesses receiving extortion letters: pay us off, or we DDoS you. Apparently there’s little follow-through on these threats, but if you pay off a blackmailer, expect to keep paying. The best bet is to strengthen your defenses.

Again, the attackers gain nothing monetarily, and it’s the DDoS defense companies who make money.

A DDoS attack is also occasionally a political act between countries.

There’s no money changing hands here either, except for the defenders.

The exception to all of this, of course, is if you buy your DDoS from someone else. Then they get money. It’s shockingly easy to buy one: IP booters are also DDoS providers, openly advertising with pitches like:

  • “DDOS cheap!”
  • “Professional Quality DDOS”

Knowing all of this, what can we do to interrupt the DDoS economic model?

If you have ideas or something you think will spark interest, write a Field Note and submit it at https://dtrap.acm.org/. Share your ideas with researchers and practitioners and help bridge the gap.

Malware and Caves and Dragons, oh my!

In this post I talked about sampling bias. I’m going to expand on that subject, because I think it’s interesting and useful.

The caveman is a popular trope. We like to think that our ancestors spent all their time in caves, because that’s where the evidence has been found. Caves protect artifacts, so of course that’s where we find them. That doesn’t mean our ancestors spent all their time in caves; it just means that’s where the evidence survived.

We run into similar problems when studying malware. We often use a sandbox to contain the malware, but that means we may only see what the malware authors want us to see. We’re stuck in a cave, in other words.

The problem is that outside of the cave we risk affecting other computers whose owners aren’t interested in being part of the experiment. We want to know how malware works outside of the cave, but as old maps used to say, “Here be dragons.” (Well, they didn’t actually say that, but it works for this analogy.) Outside of the cave, dragons. Inside of the cave, misinformation.

We want containment to avoid the dragons, but at the same time we must be aware that we are dealing with potential misinformation. This makes generalizing about malware difficult, but to study it appropriately, we must try.

DTRAP promotes containment in all cybersecurity research, not just malware research. We want to be responsible researchers and not release the dragons.

A Little More Conversation

Every bit of research begins with an idea. I wonder if… Is it possible that… Could we maybe…

These are sentences I’ve started when doing my research: I wonder if this malware uses that vulnerability, and what that means for the future. Is it possible that malware authors are collaborating? Could we learn about the economic model of DDoS providers and use that to subvert them?

On the other hand, practitioners often start with a problem. How can I stop… What will let me… What should I expect…

I started the same kinds of sentences when I worked for a startup ISP. How can I stop the malicious traffic? What will let me find malicious actors on my network? What should I expect if I enact a new security policy?

We, as a field, need to communicate these questions between researchers and practitioners. The questions from a practitioner should spark ideas for a researcher, helping them focus their research on current problems. Conversely, the questions a researcher poses should also be considered by a practitioner. Are their questions the solutions to your future problems?

The Field Notes article type at DTRAP is designed to spark exactly this exchange of research and questions. As a practitioner, is there something you wish researchers could answer? Write a Field Note that outlines the question, why it’s important, and what you think the future holds.

Field Notes are also open to researchers: we want to hear what you think the future problems will be. Will practitioners feel the same way? We want to use them to build a conversation between the two groups.

Submit your Field Note to https://dtrap.acm.org/ and help us have a little more conversation.

Sampling Bias

The amount of data in cybersecurity research is sometimes overwhelming. To cope, we sample: we choose a subset of the overwhelming ocean of data and deal with a much smaller pond.

We must be careful when we sample, because it’s possible to introduce sampling bias. This happens when some parts of the data ocean are more likely to be picked than others. Continuing with the ocean analogy, it’s when you pick all your samples from the Bermuda Triangle but ignore the rest of the Atlantic. Your results aren’t generalizable to the entire Atlantic Ocean in that case; they’re only relevant to the Bermuda Triangle.

Which is exactly the problem with sampling bias.

Returning to the world of cybersecurity, suppose you’re studying network traffic. If you only study the traffic that your organization sees, your results aren’t generalizable to the world at large unless you have very good reasoning behind them. The statement ‘My organization is just like the Internet’ is one that requires a lot of explanation. It’s possible that it’s true, from a certain point of view, but you must explain that point of view.

DTRAP wants your results to be generalizable, that is, to have external validity.  Avoid sampling bias in your research.
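To make this concrete, here is a minimal Python sketch of how a convenience sample skews an estimate of a malicious-traffic rate. The population, the honeypot scenario, and every rate in it are invented purely for illustration.

    import random

    random.seed(42)  # reproducible illustration

    # Hypothetical population of network flows: most come from ordinary
    # office traffic with a low malicious rate, a few from a honeypot
    # where the malicious rate is very high (all numbers invented).
    flows = (
        [{"net": "office", "malicious": random.random() < 0.01} for _ in range(9000)]
        + [{"net": "honeypot", "malicious": random.random() < 0.60} for _ in range(1000)]
    )

    def malicious_rate(sample):
        """Fraction of flows in the sample flagged as malicious."""
        return sum(f["malicious"] for f in sample) / len(sample)

    # Unbiased: sample uniformly from the whole ocean of data.
    uniform = random.sample(flows, 500)

    # Biased: sample only where collection is easy -- the Bermuda Triangle.
    triangle = random.sample([f for f in flows if f["net"] == "honeypot"], 500)

    print(f"true rate:          {malicious_rate(flows):.3f}")
    print(f"uniform sample:     {malicious_rate(uniform):.3f}")   # close to the truth
    print(f"convenience sample: {malicious_rate(triangle):.3f}")  # wildly off

The uniform sample lands near the true rate, while the convenience sample reports an epidemic that exists only in the honeypot. The same arithmetic applies whenever your collection point is easier to reach than the population you claim to describe.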

Moving From Research to Practice

Moving cybersecurity research into practice is often difficult, for many reasons.

I decided to look at another field with similar issues, medicine, and read a few papers on the topic. These papers include:

  • Glasgow, Russell E., et al. “The future of health behavior change research: what is needed to improve translation of research into health promotion practice?” Annals of Behavioral Medicine 27.1 (2004): 3-12.
  • Berwick, Donald M. “Disseminating innovations in health care.” JAMA 289.15 (2003): 1969-1975.
  • Glasgow, Russell E., and Karen M. Emmons. “How can we increase translation of research into practice? Types of evidence needed.” Annual Review of Public Health 28 (2007): 413-433.

These papers considered the barriers medical research faces before it can become clinical practice. Each paper considered different barriers, but I was interested in one result that all three shared.

The complexity of a research result was a key indicator of whether doctors would easily integrate it into practice: the more complex the result, the harder it is to move from a research environment into clinical practice. Since doctors and nurses are under pressure to provide more and more support to patients with fewer resources, the addition of a complex procedure is often resisted.

Can we consider the same thing in cybersecurity?

Yes and no. It depends on how we define complexity, which is very tricky. Do you add complexity when you buy an entirely new defensive system, or is that just a question of expenditures? If we define new procedures for users to avoid certain attacks, is that complexity? The details of what makes something complex will not transfer directly from medicine to cybersecurity.

We can, however, take lessons from medicine and apply them to cybersecurity. For example, the perceived benefit of a change has a lot to do with its adoption, which intuitively seems to apply to cybersecurity as well. If a change is complex but promises to defend your organization from all phishing attacks, then it may be worth the mental cost of understanding it. But we have very few such broad assurances in cybersecurity, so it is perhaps safest to search for simple, explainable improvements.

This is a topic that we need to study further so we can understand what difficulties are present and how we can do better. DTRAP wants to bridge the gap between researchers and practitioners and I believe this is an important part of that.

Special Issue on Cyber Risk and Insurance

We’re proud to announce a special issue on Cyber Risk and Insurance with guest editors Firasat Khan of Metropolitan State University, Minnesota, and Tirthankar Ghosh of the University of West Florida.

Businesses and organizations are increasingly aware of the level of cyber risk they face, and cyber risk has consistently ranked among their top concerns. Cyber risk insurance promises increased assurance of business continuity through risk transfer, but the rapidly changing landscape of cyberattack surfaces, vectors, and resulting impacts introduces unique challenges that should be better understood by practitioners and researchers alike.

Total cyber risk insurance premiums in the United States alone stand at $3-4B today and are forecast by most estimates to rise rapidly to $15-20B over the next decade, with similar growth expected globally. The bulk of the growth comes from small businesses reacting to major stand-alone instances of cyber breaches and failures. A key question that remains a concern is what would happen if a widespread cyber incident impacted hundreds or thousands of insured firms simultaneously. The relatively low loss ratio today keeps the cyber insurance market growing, but how the insurance industry would be impacted if a major widespread catastrophe came to pass is an open question.

While the intersection of cyber risk and insurance is a complex domain to navigate, the Insurance Core Principles (ICPs) developed by the International Association of Insurance Supervisors (IAIS) present widely accepted principles, standards, and guidance for the insurance sector. The National Association of Insurance Commissioners (NAIC) has recently published helpful guidance outlining the ICPs that relate to cyber risk (Supervision of cyber risk, Enterprise risk management, and Information sharing and cooperation). Cyber incident and breach experience and emerging applied knowledge need to be reconciled with accepted insurance principles, and an appropriate framework for understanding and applying responsive insurance products is very much needed.

This DTRAP special issue on Cyber Risk and Insurance seeks to highlight recent developments in how organizations should evaluate their cyber risks, verify that their controls are appropriate, and ensure appropriate insurance coverage. The latter is particularly relevant in light of clients complaining of denied claims when their cyber insurance coverage was not adequate in the first place.

Topics include:

  • Cyber Risk Assessment
  • Cyber Risk Mitigation
  • Cyber Risk Modeling
  • Cyber Risk Reporting and Dashboards
  • Cybersecurity Controls and Standards
  • Cybersecurity Compliance
  • Cybersecurity Metrics and Measurements
  • Cybersecurity Maturity Models
  • Cybersecurity Economics and Shared Responsibility in Cyber Market
  • Cyber Insurance
  • Cyber Value-at-Risk (CVaR)
  • Business Interruption

We welcome two types of research contributions:

  • Research manuscripts reporting novel methodologies and results (up to 25 pages).
  • Experience papers (field notes) that report on lessons learned from addressing specific issues within the scope of the call. These papers should be of interest to the broad cyber risk management community. (10+ pages plus an optional appendix).

Important dates and timeline:

  • Initial submission: August 30, 2019
  • First review: November 30, 2019
  • Revised manuscripts: December 31, 2019
  • Second review: February 15, 2020
  • Camera-ready manuscripts: March 30, 2020
  • Publication: May 30, 2020


The Benefit of Hindsight

Twenty years ago, the Melissa Virus was big news.

It was the first mass-mailing virus, spreading itself from systems that used Word and Outlook via macros. Using the first fifty contacts it could find, it mailed itself from user to user, clogging up networks, overloading mail servers, and generally causing havoc.

Today, the goal of such software is to make money. The malware authors invest time and energy in their software and they want a return on that investment. Back then, the end goal of the Melissa Virus was apparently to just cause havoc, which it did very well. For some organizations, the load on the email server was too much and they shut down. While it wasn’t the goal of the software to make money, it did cause some organizations to lose money.

A researcher found the author of the virus, thanks in part to Usenet. The author was sentenced to 10 years in jail but served only 20 months.

Of course, we don’t see this kind of event today. Most events we see, even on a broad scale such as this, have making money at the core. On the other hand, we can consider what we could learn from such an event. We created the column With the Benefit of Hindsight to examine these events. We want to know how past events changed the landscape, what we can learn from them, and what insights we can use to shape the future.

If there is an event that interests you, why not write a column for us on the topic? We want to hear from you on how the event changed the landscape, or changed how you reacted. Would you do things differently now from what you did then? Submit a column to https://dtrap.acm.org/ and let us hear from you.

The Principle of Containment

If a human virus researcher decided to test a new virus they’d created by just releasing it to the world and seeing what happens, that would be the start of a horror movie. Needless to say, this is something they try to avoid.

As cybersecurity researchers, we should do the same. Releasing a virus into the wild just to see what happens is an extreme example, but still, it’s something to avoid. Malware is often run in a sandbox to prevent this, but malware authors do their best to detect and evade the sandbox. It’s a never-ending game: the malware researchers make a move to study the malware, and the authors make a move to avoid the study.
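To make the game concrete, here is a small Python sketch of the kind of environment checks malware uses to guess that it is inside an analysis sandbox. The heuristics, thresholds, and messages are generic illustrations chosen for this sketch, not the behavior of any real sample.

    import os
    import uuid

    # MAC address prefixes (OUIs) assigned to common hypervisors; seeing
    # one suggests the code is running inside a virtual machine.
    VM_MAC_PREFIXES = ("00:05:69", "00:0c:29", "00:50:56", "08:00:27")

    def looks_like_sandbox() -> bool:
        """Guess whether we're in an analysis VM (illustrative heuristics only)."""
        # Analysis VMs are often provisioned with minimal hardware.
        cpus = os.cpu_count()
        if cpus is not None and cpus < 2:
            return True
        # Check the network card's vendor prefix against known hypervisors.
        node = f"{uuid.getnode():012x}"
        mac = ":".join(node[i:i + 2] for i in range(0, 12, 2))
        return mac.startswith(VM_MAC_PREFIXES)

    if looks_like_sandbox():
        print("sandbox suspected: stay quiet, look harmless")  # the analyst sees nothing
    else:
        print("no sandbox detected: run the real behavior")

Each time analysts hide one of these artifacts, authors go looking for another one, which is exactly why skipping the sandbox is so tempting.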

It’s tempting to skip the sandbox when studying malware: if malware attempts to hide from the sandbox, take the sandbox away. This isn’t quite the same as releasing a computer virus, but it still affects the Internet. Malware is malicious for a reason: it wants to spread, control computers, steal, and in general cause trouble.

If we don’t contain the software, then we become part of the problem while we’re trying to solve it. But if we contain the software, the software knows this and acts accordingly. It’s a Catch-22 situation and one we have to navigate carefully in research.

DTRAP supports the use of containment in research, as we don’t want our research to affect the Internet while we study Digital Threats.

Jumping to Conclusions Without a Parachute

This study discusses whether or not it’s more useful to wear a parachute when jumping out of an airplane. It comes to the surprising conclusion that it doesn’t matter whether you wear a parachute: you have the same chance of surviving either way.

The catch, of course, was that the study only looked at drops of five feet.

Thinking about this in terms of cybersecurity, this is like doing a study on a very small office and writing it up as if the study were done on the entire Internet, then casually mentioning at the end of the paper, “This study was done on an office of ten people,” or skipping that part entirely.

Or it’s like twisting the words in a write-up to make it sound scarier than it is. “This vulnerability will crash every system ever!” sounds terrifying, right? But what if that vulnerability only works if you have physical possession of the machine, have admin access, and it’s a Tuesday? (I exaggerate a bit, of course.) By the time you have physical possession of the machine and admin access, why would you need the vulnerability? You can crash the machine to your heart’s content anyway.

There are at least two things to take away from this. One, be careful about the scope of your study. If you want your results to be generalizable, examine what you actually measured: is it generalizable, or are you talking about a specific situation? Jumping from an airplane sounds generalizable to all parachutists, until you realize it’s only five feet from the ground. Keep your work relatable.

Two, try to avoid sensationalism. It’s tempting to attract public attention by making grandiose statements, but does crying wolf over minor problems really help the situation?