Mean vs Median — Cage Match

This is another one of those blog posts that I hope will help you do research; it’s not necessarily about DTRAP.  It’s also not directly about a cage match, sorry!

If you’ve ever taken statistics, one of the first things you learn is mean, median and mode.  I’m going to talk about what they mean, as opposed to how to compute them.  

It all goes back to Carl Friedrich Gauss.  It was a warm summer’s evening when…

Yeah, I won’t do that.  What I will talk about is the Normal Distribution, also known as the Gaussian distribution or the Bell Curve.

That’s our friend the normal distribution.  One property it has is that the mean, the median, and the mode all have the same value, right there in the center of the distribution.  In this example, the mean, median, and mode are all 0.

That means the balance of the data is in the center.  The mean of a distribution is the point where, if you balanced the distribution on it, half the weight would be on one side and half on the other.

The median is where the middle of the data is. It’s a counting exercise that knows nothing about the values in the data itself. The mode is a similar counting exercise. You look for the value in your data with the highest frequency.

In the case of a normal distribution, everything is balanced at the center and we’re all happy about it. But life and data aren’t often normal, so…

What if the mean were 73.15 and the median were 2?  What would the data look like then?

In this case, it looks like that picture.  The data has a few points which are really large and other than that, they’re mostly kind of small.  The large data points skew the mean though.

However, if we look at the median, it’s 2.  That means half the data points are 2 or smaller and half are larger.  But if we balance the data on a stick, that balance point is around 73.15, and there’s a lot of data on the left side and not so much on the right.
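To make the skew concrete, here’s a quick sketch using Python’s standard-library statistics module on a made-up data set (the numbers are invented; they just mimic “mostly small values plus a huge outlier”):

```python
import statistics

# A made-up, skewed data set: mostly small values plus one huge outlier.
data = [1, 1, 2, 2, 2, 3, 3, 1000]

print(statistics.mean(data))    # 126.75 -- dragged far up by the outlier
print(statistics.median(data))  # 2.0    -- half the values are 2 or below
print(statistics.mode(data))    # 2      -- the most frequent value
```

One outlier drags the mean far from where most of the data lives, while the median and mode stay put.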

Mean and median tell us two things about where the center of the data is. One looks at the weight, or balance of the data and the other counts and says “Here, in this spot, this is the middle of your data”.

Which leads me to my next point.

If we rely just on mean and median without actually plotting the data, we don’t know what it looks like.  A single number doesn’t describe a set of data, as much as we’d like it to.

The two numbers combined can tell us something, though, even if we didn’t have that plot.  The median of 2 tells me that half the numbers in my data set are small.  The mean of 73.15 combined with that tells me there are some really big numbers in the data, because otherwise I’d expect the mean to be around 2… if my data were normally distributed.

One number isn’t enough to describe data without more context, so keep that in mind.

Writing Reviews — Revisited

I’ve written before about writing a review, and there are even guidelines on the DTRAP website.  I wanted to revisit the topic because I think it’s useful.

Writing a good review is a skill; you’re not going to get it right the first time, but there are a few things I want to talk about that can help you.  This time with examples.

A review should be useful to the person who wrote the paper.  It should tell them what they did right, what they missed, and how they can improve.  No paper is completely perfect but the author won’t know how to make it better if we don’t tell them.

First, I know I’ve said this many times before, but be respectful in your review.  Saying:

This paper sucks.

Is neither helpful nor polite.  It’s a very bad review, so please don’t do that.  No one who reads that review is going to take anything away from it about how to improve their work, just that the reviewer couldn’t be bothered to do it right.  And it’s disrespectful to the author.  A disrespectful review can put people off fixing their work, and even off doing research.  Don’t be that person.

Suppose someone submits a paper on researching vulnerabilities and the people that find them.  You’re a vulnerability researcher (I’m making all of this up) and we ask you to review it.  You read it and find the paper very interesting and well written, but it’s completely missing the impact of Vulnerability Disclosure Programs on researchers and vulnerabilities.  (Did I mention I’m making all of this up?  Because I am.)

If your review only focuses on the positives of the paper, ‘well written!’, ‘very interesting!’, ‘useful to researchers in vulnerabilities!’ then you’re missing out on part of your job as a reviewer.  You should let the authors know that they missed this information.  It’s important to the field and omitting that information makes the paper that much weaker.  You should write something like:

The authors have written a very good paper, however, I am concerned by the omission of Vulnerability Disclosure Programs (VDPs).  They have been shown (maybe add some citations!) to affect vulnerability researchers and the vulnerabilities found.  The authors should revise and include this information and how it affects their research or they should explicitly detail why it doesn’t affect their results.

See?  Useful.  Tells the authors what to fix and how to fix it.  And written respectfully.  Writing something like:

I can’t believe the authors omitted Vulnerability Disclosure Programs.  Any researcher worth their salt knows they’re important.  This paper is completely useless without it.

Is a very bad review.  Don’t do that. 

I’m going to revisit this topic often because it’s important to a functioning journal.  The reviewers keep us going once we get the submissions, and writing good reviews is a skill.

Erecting a DTRAP Column

Unlike most academic journals, DTRAP has two columns.  This blog post is to talk about those columns and hopefully inspire you (my reader) to write one.

First, I should talk about why you should write a column as opposed to a blog post.  Or at least, why I think you should.

Anyone can write a blog.  I’m writing a blog; if I can do it, you can do it.  I could even write a blog post about how all ransomware will make the color of your screen turn slightly purple.  I made that up, by the way.  If you come across this post via Google, that statement is not true.

Though if you quote it somewhere, please cite me.  I will be entertained!

The point is, there’s generally no vetting process for blogging.  Companies may have a process, but a single person can post whatever they want.  I mean, I could even say a DDoS will turn your network yellow (again, another fictional statement).

If you write a column for DTRAP, we have an expert in that field read over the column and tell us if they think it’s worth publishing.  In other words, what you’re getting is a check mark from an expert that this is worth something.  It’s more than just a random thought, it’s a real idea that someone says has merit.

Also, blogs can go away.  A column in DTRAP is archived by ACM and… will never go away.

Third, DTRAP has prestige.  We’re an ACM journal; something we publish has more prestige than a random blog.  I’m not saying your blog is random, I’m just giving you reasons why I think you should write for us.

So, now that I covered why, I should cover what. 

DTRAP has two columns, one is ‘With the Benefit of Hindsight’ and the other is ‘From Research to Practice’.

The author George Santayana said, “Those who cannot remember the past are condemned to repeat it.”  Cybersecurity isn’t an incredibly old field, compared to, say, medicine, but we do have history.  We want you (our author) to take a historical event and look at it with current eyes.  What did we learn from it?  What could we have done differently to prevent it?  How can we keep something like that from happening now?

One of the first events I dealt with was the AlterNIC incident.  I had to explain to management where I worked what happened and how it affected us.  What did I learn from it?  What could we have done differently to prevent it?  Those are questions I could address if I wrote a column on it.  Or you could write one on an event that you remember or learned about.  What about the Morris Worm?  Or the ILOVEYOU virus?  You could even consider an event that only affected you locally.  What did you learn from your first DDoS?

That’s the first column.  The second column is “From Research to Practice”.  Have you ever taken an idea from a research paper and used it at your organization?  How did you make it work?  What did you learn from it?  Almost every idea we use today started with someone saying “I wonder if this’ll work…”.  Like firewalls, or IDS, or antivirus.  Someone had the idea and, in some cases, wrote a paper.  That idea moved from the paper (research) to a program on a computer (practice).  This is never a smooth transition, but you can always learn from the process how to smooth it out.

You can also use this column as a way to talk to researchers.  If you have seen a new trend in practice that you think a researcher could use, write about it.  Tell the researcher why you think it’s important.

If you have any questions about writing columns, contact us.  We’re here to help!

Writing a Review

The first time I reviewed a paper, I felt like I was back in school again.  There was a timer, a proctor, a paragraph I had to read and then I had to choose the ‘correct’ answer or I got the question wrong.

Or back taking a Literature class.  I’d read a short story or a book and then I’d have to write a paper about it.  I’d have to choose the ‘right’ interpretation of the short story or book or… I’d fail that essay.  My favorite English Literature professor in college told us that as long as we could justify it from the work, we could have any interpretation we wanted.  There was no ‘right’, which was really nice. I wasn’t trying to guess what interpretation the professor was looking for, I was just trying to justify my opinion.

The same thing is true in writing a review, there’s no right. There’s you reading the paper and using your best judgement.  No professor is going to take your review and go to town on it with a red pen.  The first person to read your review will be the Associate Editor and they’re going to treat it like your opinion, not something that needs to be graded.  You aren’t back in class, you aren’t worried about failing, you’re just giving your opinion on a paper.

And I do have some tips about the process. Things that will make it easier for you to review.

First, read the paper from beginning to end.  I usually print it out and make notes on it, or use my iPad and make notes on it, or even read it on my computer and (you guessed it) make notes on it.  These are notes for you for when you actually write the review; no one else is going to read them.  I look for things like ‘This is interesting!’ or ‘Wait… I read a paper on that’ (I make a note to go look up the paper if necessary), or ‘Um… wait, that data set has problems, I’ve tried to use it’.  I’m looking for things that catch my eye.  I might circle entire paragraphs and make a note to double-check what they said.

The first read through is mainly to get an idea of what it said and what interests me about the paper or what problems I see immediately.  If the author(s) neglected to tell me the origin of the data they used, I make a note.  If they give me a URL, I might go look.  My main goal in that first read is to answer the question ‘what can this paper teach me that I don’t already know?’.

Then I go look at the review questions.  DTRAP has several and since I’m co-EiC of DTRAP, well, I’ll just use those as my example.

  1.  What is this paper about?
  2. How does this paper contribute to the field of Digital Threats?  What are the strengths of this paper?
  3. How can the paper be improved? What are its weaknesses? How can it be strengthened?
  4. Is this paper of potential interest to developers and engineers?

Using my notes, I consider each question.  If I’m not sure, I go back and read the paper again.  The first time, I spent a lot of time going back and forth; by the third time, I had enough experience to make the notes from my first read more useful for the actual review.

I hope those questions are straightforward.  The DTRAP Reviewer Guidelines discuss more about what we expect from the answers to them; please refer to them if you aren’t sure.  If you’re still unsure, contact either the AE in charge of the paper (he or she is the person who asked you to review) or one of the co-Editors-in-Chief of the journal.  We want to help.  We want you to feel comfortable reviewing, but we know it’s a new skill to pick up.

I do want to say whatever you write in answer to those questions, remember you’re speaking to the author.   Be polite and respectful, always.  

Operating Systems Operate

When I got into this business, the distribution of operating systems was completely different from today.  People had personal computers but there were also mainframes and one of the first systems I learned on was a VAX.  The idea of carrying a computer meant picking up the desktop and moving it to another room.  Considering how heavy they were, that wasn’t done often.

Of course, times change, and now everyone has a computer (or two, or three) on their person.  The threats of 30 years ago aren’t applicable to today. I was interested in this change, so I decided to look at trends in operating systems and found a data set.

This data set covers January 2009 through December 2016.  Definitely more recent than what I started with, but not up to date.  It was the data I found, so I decided to use it.

I started with some basic data analysis to look at the change from beginning to end.

In 2009, WinXP was still high on the list at 74.36%.  By 2016 it had dropped to 2.24%.  Meanwhile, Android grew from 0 to 37.8%.  That was four years ago; I’m sure the numbers have changed.
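If you want to replay that comparison yourself, a minimal sketch might look like this (the dictionaries below hold only the figures quoted above, not the full data set):

```python
# Market share (%) for the first and last snapshots described above.
# Only the figures quoted in the text are included here.
share_2009 = {"WinXP": 74.36, "Android": 0.0}
share_2016 = {"WinXP": 2.24, "Android": 37.8}

for os_name in share_2009:
    delta = share_2016[os_name] - share_2009[os_name]
    print(f"{os_name}: {delta:+.2f} percentage points")
# WinXP: -72.12 percentage points
# Android: +37.80 percentage points
```

The same loop works unchanged if you load the full data set into the two dictionaries.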

On April 8, 2014, Microsoft stopped providing support for Windows XP.  That means, as of that date, there were no more security patches or updates.  And yet, as of four years ago, there were still quite a few systems on the Internet running Windows XP.

I started writing this partly because someone mentioned finding a Windows 98 system on a network recently.  I had to wonder how much risk out-of-date systems expose us to, so I started by looking at what those systems are.

And it is a vulnerability. Operating systems that are no longer updated for security problems but are still on the Internet are very vulnerable to attack. What steps should we take to mitigate this going forward? This isn’t a “Buffer overflow of the week” vulnerability but a vulnerable system that could affect the entire network.

DTRAP has a special issue on vulnerabilities. We want to hear from you on vulnerabilities, how to mitigate them, and how to coordinate.

Reviewing Basics

A journal lives and dies not just by the people that submit to it, but the reviewers who read the papers and determine if they’re fit for the journal.

I’m going to talk about the review process in this blog post.  It’s a great way to contribute to a journal and to the field.  It sounds intimidating though, so I hope this demystifies it a bit.

First, this isn’t like in school where you submitted a paper, someone marked it up with a red pen, and handed it back.   I know why teachers use red pens (it’s easier to read) but seeing my paper covered with red ink was never a happy thing.

Instead, this is you reading the paper and giving us your opinion on it.  There’s no right or wrong here, it’s your opinion.  This isn’t a school essay you’re grading, this is most definitely your opinion.  I think I said ‘your opinion’ three times in this paragraph, well, four if you count this sentence, but I want to reiterate that.  You can’t get this wrong.

In most journals, you read the paper, you answer some questions about the paper, and then you have to make a choice about what you think should be done with the paper.  I’m going to talk about the four options you have.

  • Reject:  This doesn’t mean the paper is bad.  It can mean that the paper doesn’t fit the journal.  For a journal on digital threats, a paper about paper airplanes is a bad fit and shouldn’t be accepted.  It could also mean that there’s nothing novel in the paper.  For example, if the paper proposes a method to filter traffic at the network level but sounds suspiciously like a firewall, then that’s not a new method; that’s an old method someone is trying to publish.  What you’re really saying is that the paper has problems that can’t be fixed.
  • Major Revisions:  In this case, you’re saying that you like the outcome of the paper, but there are major problems with the methods or data the authors used.  You think those problems can be fixed, and you want to give the authors a chance to fix them.
  • Minor Revision:  In this case there are small problems.  You want the author (or authors) to create the best paper they can, and you’ve found a few things that should be fixed.  Maybe a graph is labelled incorrectly.  Maybe they’re missing a step in their method.  The results are good, it’s just… the little things.
  • Accept:  The paper is perfect; you can’t think of a single thing to fix.

You’re also not the final decision on the paper.  Just because you think a paper should be accepted or rejected doesn’t mean that’s what is going to happen.  Three reviews are required for every paper.  It’s not just you making that decision.

The Associate Editor (AE) in charge of the paper will take your review and the other two reviewers’ reviews into consideration.  They don’t just read your recommendation of ‘Reject!’ and act on it; they read what you’ve written to the author.  Then they make their own recommendation.

It’s actually the Editors-in-Chief who make that final decision. But we couldn’t do it without your opinion on the paper.  

DTRAP Special Issue on Security & Privacy for Covid-19

Digital Threats: Research and Practice (DTRAP) is a peer-reviewed journal that targets the prevention, identification, mitigation, and elimination of digital threats. DTRAP aims to bridge the gap between academic research and industry practice.

The COVID-19 pandemic has raised several issues about the secure and private management of individual information, in which one must balance the health risks to society as a whole against the risk of entering a surveillance society (one that might not even deliver on its health safety objective). The goal of the DTRAP special issue on Security & Privacy for Covid-19 is to provide an expedited yet rigorous peer-reviewed venue to discuss the issues behind the governance of pandemics and to open a dialogue between researchers in the field, software practitioners, and the policy decision makers who grapple with these problems on a day-to-day basis.


We solicit papers on all areas related to security and privacy issues arising from COVID-19 and, more generally, pandemic research, in particular:

  • Measurement, evaluation and certification of security and privacy issues (or lack thereof) with COVID19 contact tracing apps
    • Area Editor(s): Elissa M. Redmiles, Fabio Massacci, Sara Matheu
    • Measurement and evaluation (e.g., security testing) allow us to quantify threats to security and privacy and inform the mitigation of those threats, whereas security certification allows us to validate and demonstrate the security of a product. We’re interested in measurements, evaluation, and certification showing security and privacy issues (or lack thereof) in COVID19 contact tracing apps, in the lab or in the wild.
  • Human factors related to security and privacy of COVID19 contact tracing apps
    • Area Editor(s): Elissa M. Redmiles
    • Human factors and questions of usability are known to be critical in the design of many security and privacy-related systems. COVID19 contact tracing apps are no exception. We seek to understand what security-and-privacy-related human factors issues will arise with COVID19 contact tracing apps and how they can be mitigated.
  • Systemic and Individual Risks of digital tracing
    • Area Editor(s): Fabio Massacci
    • Contact tracing has traditionally been a human-driven activity. A human tracer is slow but makes few mistakes. Digital technologies allow tracing to scale up in speed, at the price of also scaling up false positives and false negatives. We are interested in discussion of the trade-offs in systemic risks and benefits that such large-scale tracing might imply (e.g., people requesting tests who have no need of them), as well as adversarial behavior that individuals might adopt to avoid being tagged as false positives (thus thwarting the benefits of large-scale tracing).
  • Cryptographic Techniques for Privacy Preserving Data Collection, Contact Tracing, and Data Analysis 
    • Area Editor(s): Gabriel Kaptchuk
    • Collection and analysis of detailed information about the spread and symptoms of COVID19 is critical to containing the pandemic.  However, once collected, this data can be repurposed and misused.  We are interested in techniques that facilitate this collection, tracing, and analysis while preserving the privacy of the individuals whose data is being collected.


We solicit three types of contributions:

  • Research Paper on original research contribution in the area (maximum of 25 pages)
  • Experience Paper (With the Benefit of Hindsight) reporting extensive lessons learned in the field (same length limit as a Research Paper)
  • Field Notes on short industry case reports (around 1000-1500 words)

More details on formatting instructions are available at


If you have questions on whether your paper fits the special issue, please send them to:

IMPORTANT DATES (All Deadlines AoE):

Submission open: Mon. Sep 12, 2020 

Deadline for submission: Mon. Nov 23, 2020 

First review: Mon. Feb 22, 2021

Revised manuscripts: Mon. Apr 26, 2021

Second review: Mon. Jun 28, 2021

Source Files Due: Mon. Aug 30, 2021

Publication: Mon. Oct 25, 2021


Please submit the paper through Manuscript Central at

Authors are requested to provide a list of potential reviewers for their paper, which the editors may use at their discretion. Additionally, submitting authors may be asked by the editors to review other submitted papers, if appropriate.

DTRAP Special Issue on the Lifecycle of IoT (In)security

The IoT promises substantive socioeconomic benefits, but the low margin on many IoT devices means that, like other markets focused on the first-mover advantage of a minimum viable product, security is an afterthought.  The result: a large number of insecure devices that are easily compromised, creating risks to consumers, enterprises, and the Internet’s infrastructure.  Technical challenges include developing robust security features for low-margin, resource-constrained devices.  That said, many of the vulnerabilities in IoT devices are well-known—weak passwords, poor security management controls, long-known compromises in network services running on these devices.  Despite understanding how to recognize and avoid these vulnerabilities, IoT devices with these flaws are abundant.  Complementing the technical challenges, understanding the economic incentives, in particular how these incentives are distributed over the lifecycle of IoT devices, is also critical to ensuring IoT security is both technically sound and economically feasible.

This special issue also brings focus to the structure of the life cycle itself, encouraging articles that highlight how these challenges are interleaved into both ex ante security requirements at the design and pre-market testing phase, as well as ex post analyses and reporting of vulnerabilities and exploits that inevitably emerge in the wild.  Once IoT devices are deployed, they fall under the purview of cybersecurity analysts who defend the surrounding networks.  Compared to traditional computing devices, the IoT security lifecycle is particularly challenging: these “black boxes” have few options for visibility or interoperability, rarely conforming to common standards.

DTRAP aims to create a special issue focusing on the sociotechnical challenges of the IoT security lifecycle.  Topics include:

  • Commodity IoT chipsets
  • Economics of security features for IoT manufacturers
  • Security in the IoT supply chain
  • IoT security standards and regulatory requirements
  • IoT security management frameworks
  • Validating IoT security standards compliance in the wild
  • IoT device teardowns
  • IoT threat hunting
  • Insecure commodity IoT device OSes
  • IoT compromise case studies
  • Commodification of IoT botnets
  • Monitoring IoT devices for malicious behavior
  • Mitigation and response to IoT compromise in the ISP, enterprise, and/or infrastructure
  • IoT device identification, authentication, and authorization
  • Signaling IoT security features
  • Orphaned IoT devices
  • IoT compromise mitigation, recovery, and remediation strategies
  • Malware targeting common embedded operating systems
  • Approaches to protect devices from post deployment exploits

DTRAP invites both academics and practitioners to submit papers for consideration in the journal.  Extended conference papers are welcome, provided they have been revised to include at least 25% new content (and the previously published proceedings paper is submitted along with the research paper).  Practitioners are welcome to submit a journal style research paper or a “Field Notes” paper.  A “Field Notes” paper is a short case report about emerging threats and developments, emphasizing a particular development or application over a rigorous study or proof of security.

To submit to this special issue, please visit ACM Manuscript Central at and select paper type “Special Issue on the Lifecycle of IoT (In)security.” Details of the author guidelines for DTRAP are available at

Paper submission due: 15 January 2021

Special Issue on Vulnerabilities

Call for Papers

Digital Threats: Research and Practice

Special Issue on Vulnerabilities

Digital Threats: Research and Practice (DTRAP) is a peer-reviewed journal that targets the prevention, identification, mitigation, and elimination of digital threats. DTRAP aims to bridge the gap between academic research and industry practice.

Vulnerabilities are a fundamental aspect of the field of Digital Threats. How we discover, manage, and reduce the impact of vulnerabilities is as important as the vulnerabilities themselves. The goal of the special issue of DTRAP on Vulnerabilities is to consider these issues and open a dialogue between the researchers in the field and the practitioners who grapple with these problems on a day to day basis.

Areas of Interest

We solicit papers on all areas related to vulnerabilities research and in particular:

Coordinated Vulnerability Disclosure (CVD) 

Area Editor: Jeroen van der Ham, NCSC/UTwente

  • Vulnerability disclosure policy can be seen as a wicked problem that gives rise to many questions, such as timing and ethics, which metrics are useful, and how to balance fairness among the different parties in multi-party disclosure.

Organizational, National, and International policy and trust issues surrounding vulnerabilities 

Area Editors: Victoria Wang, University of Portsmouth; Jeroen van der Ham, NCSC/UTwente

  • Vulnerabilities are a challenge for policymakers, as they don’t seem to fit current policy thinking. Vulnerability disclosure policy can contribute to security, but trading in vulnerabilities can create security challenges. This area covers organizational data release and its related issues of trust, privacy, and security, as well as how policies relate to current practice and how future policy-making initiatives change the practical landscape.

Vulnerability Exploitation and Mitigation in the Wild  

Area Editors: Fabio Massacci, UTrento/VU Amsterdam; Victoria Wang, University of Portsmouth

  • Many vulnerabilities are found, but few are exploited. We seek to understand which vulnerabilities have actually been exploited and how to capture the characteristics that make a vulnerability exploitable, to provide a scientific, evidence-based guide to industry. Similarly, we seek work on socio-technical security threats and management measures in organizations.

Software Supply Chain Vulnerability Exploitation

Area Editors: Antonino Sabetta, SAP; Ivan Pashchenko, UTrento

  • Many large security incidents have occurred due to vulnerabilities introduced by third-party software components. We seek to understand how to identify and mitigate vulnerabilities in third-party software components, to provide a scientific way to improve the security of industrial use of third-party software.

Mining Open Source Code Repositories for Vulnerability Data

Area Editor: Antonino Sabetta, SAP

  • In principle, all the essential information about vulnerabilities affecting open-source software, as well as their fixes, is readily available in the source code repositories where such software is developed. In practice, the link between security advisories (e.g., from the NVD) and the corresponding flaws (and fixes) at the source code level is difficult to determine. Still, reconstructing such links between the two is key to performing an accurate assessment of the impact of a given vulnerability.

Vulnerability Management through Attack-Surface Reduction

Area Editor: Nick Nikiforakis, Stony Brook University

  • Attack-surface reduction (also known as debloating) refers to the late-stage customization of software where each deployment of a given piece of software removes the functionality that is not necessary for the users of that deployment. As unnecessary features are removed from a given deployment, so are the potentially unknown vulnerabilities that reside in these features. We seek to understand practical methods of performing attack-surface reduction, how debloating methods can be incorporated in existing production environments, and new metrics for quantifying the security benefits of attack-surface reduction (e.g., going beyond LOC reduction).

Submission Information

We solicit three types of contribution:

  • Research Paper on original research contribution in the area (maximum of 25 pages)
  • Experience Paper (With the Benefit of Hindsight) reporting extensive lessons learned in the field (same length limit as a Research Paper)
  • Field Notes on short industry case reports (around 1000-1500 words)

Submit to this special issue at and select the paper type “Special Issue on Vulnerabilities”.  DTRAP author guidelines are available at

Authors are requested to provide a list of potential reviewers for their paper, which the editors may use at their discretion. Additionally, submitting authors may be asked by the editors to review other submitted papers, if appropriate.

Initial submission:         Nov 30, 2020 

First review:                  Mar 1, 2021

Revised manuscripts:    May 1, 2021

Second review:             Aug 1, 2021

Publication:                  Late 2021 (tentative)

For questions or further information, please write to:

Digital Threats…to the Internet

 The Internet was created to survive a nuclear attack.  It’s distributed, cooperative, and generally finds its way around breakpoints.  The goal is for constant uptime, for everything to be accessible.  For no choke points to stop the flow of traffic.  To paraphrase the novel Dune:  The data must flow.

Then we have choke points in DNS.

From this post, it seems that a minority of name servers serve a majority of domains.  One outage that affects a portion of those, say all of Google’s name servers, would take down a good portion of the Internet (eventually; cached results would have to time out before this happens).  It’s not probable, but it’s possible.
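That concentration claim is easy to quantify once you have a mapping from domains to their name servers.  Here’s a toy sketch (the mapping below is entirely invented for illustration) that measures what fraction of domains depend on the single busiest name server:

```python
from collections import Counter

# Invented example: which name server is authoritative for each domain.
domain_ns = {
    "a.example": "ns1.bigdns.example",
    "b.example": "ns1.bigdns.example",
    "c.example": "ns1.bigdns.example",
    "d.example": "ns2.other.example",
    "e.example": "ns3.other.example",
}

# Count how many domains each name server handles, then take the busiest.
counts = Counter(domain_ns.values())
top_ns, top_count = counts.most_common(1)[0]
print(f"{top_ns} serves {top_count / len(domain_ns):.0%} of domains")
# ns1.bigdns.example serves 60% of domains
```

Run against a real zone-file or passive-DNS data set, the same counting exercise would tell you exactly how concentrated the choke point is.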

Then in 2014, BGP routing was affected by 512K day, that is, the day the global routing table finally became too big for some routers to handle quickly, causing slowdowns.

These aren’t the results of malicious actors; there’s no malware here.  There are just the usual actions of a distributed network that suddenly has points that could cause slowdowns or, worst case, outages.  These are digital threats to the Internet.

Do we have other choke points we should be aware of?  Other possibilities that can take down the distributed Internet? 

Write about them for DTRAP. Tell us what these possibilities are and how we can mitigate them. Submit your paper to