Special Issue: Deep Learning and Explainable Artificial Intelligence in Network Security

Breakthroughs in 'deep learning', through the use of intermediate features in multilayer 'neural networks' and of generative adversarial networks that employ neural networks as generative and discriminative models, combined with the massive increase in the computing power of GPU chips, have made 'artificial intelligence' widely popular and widely used over the past decade. The quotation marks in the previous sentence are inserted on purpose to remind the reader that learning in the biological sense, which improves survival outcomes via biological nervous systems, and intelligent decisions that improve energy and resource availability remain far from what current software can hope to achieve. The purpose of this Special Issue is to bridge this gap: to develop explanations and an understanding of functioning AI/ML methods, and to develop AI/ML methods that generate outcomes with predictable properties when fed data satisfying certain conditions in network security.
Thus, it is hoped that the Special Issue will stimulate AI that increases efficiency without compromising safety, trust, fairness, predictability, and reliability when applied to systems with large energy use such as power, water, transport, or financial grids, and to law and government policy. As a first step towards this goal of transparency of AI algorithms, we seek papers that document their methods so that:
The results are reproducible, at least in the statistical sense;
Algorithms are provided in a common language of sequences of vector-matrix algebra operations, which also underlies much of deep learning in the network security field;
Conditions satisfied by the data inputs and by the objective functions of the optimization or curve fitting in the network security field are explicitly listed;
The propagation of data uncertainty to algorithmic outcomes is documented through sensitivity analysis or Monte Carlo simulations, as sketched below.
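As an illustration of the last point, the following minimal Python sketch propagates input uncertainty through a deterministic algorithm by Monte Carlo sampling. The model function, the nominal input, and the noise level are all hypothetical placeholders; any trained detector or pipeline expressed as vector-matrix operations could stand in for `model`.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical detector score: any deterministic pipeline could stand here,
    # e.g. a trained network's forward pass written as vector-matrix algebra.
    W = np.array([[0.8, -0.3], [0.1, 0.5]])
    return np.tanh(W @ x).sum()

x_nominal = np.array([1.0, 2.0])  # nominal input (e.g. measured traffic features)
sigma = 0.05                      # assumed standard deviation of the input noise
n_runs = 10_000

# Propagate input uncertainty: perturb the input, re-run the algorithm,
# and report the spread of the outcomes.
outputs = np.array([
    model(x_nominal + sigma * rng.standard_normal(x_nominal.shape))
    for _ in range(n_runs)
])

print(f"output mean  : {outputs.mean():.4f}")
print(f"output std   : {outputs.std(ddof=1):.4f}")
print(f"95% interval : [{np.quantile(outputs, 0.025):.4f}, "
      f"{np.quantile(outputs, 0.975):.4f}]")
```

A paper following this requirement would report such output statistics alongside the assumed input noise model, so that readers can judge how sensitive the algorithmic outcome is to data uncertainty.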
Potential issues of interest include the following. While there is, in general, no repeatability in the training of weights in deep learning or in most neural networks, there is repeatability in the approximated functions or decision boundaries obtained for similar sets of input data; a small sketch of such a check follows this paragraph. Such results also exist in adaptive control in the network security field, where asymptotic tracking is achieved without convergence of the parameter estimates. Similarly, a ChatGPT-like AI needs to maintain the consistency of its conclusions, provided the inputs remain consistent. The use of AI in the law can have, for example, quantifiable goals such as prompt compensation of the victim and long-term reformation of the criminal to higher levels of productivity, rather than the classical legal outcomes of punishment or retribution, which are subjective.
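To make the repeatability point above concrete, the sketch below (assuming scikit-learn and a toy two-class dataset as a stand-in for real network traffic features) trains the same small network from two different random initializations: the learned weights differ, yet the resulting decision boundaries, probed on a dense grid, largely agree.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# Toy two-class data standing in for, e.g., benign/malicious traffic features.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

# Train the same architecture twice from different random initializations.
clf_a = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1).fit(X, y)
clf_b = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=2).fit(X, y)

# The weight matrices themselves are not repeatable ...
weight_gap = np.linalg.norm(clf_a.coefs_[0] - clf_b.coefs_[0])

# ... but the decision boundary, probed on a dense grid, largely is.
xx, yy = np.meshgrid(np.linspace(-2, 3, 200), np.linspace(-1.5, 2, 200))
grid = np.c_[xx.ravel(), yy.ravel()]
agreement = np.mean(clf_a.predict(grid) == clf_b.predict(grid))

print(f"first-layer weight difference (Frobenius norm): {weight_gap:.2f}")
print(f"decision agreement on the grid: {agreement:.1%}")
```

Documenting repeatability at the level of decisions rather than weights is one way submissions can satisfy the reproducibility requirement listed earlier.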
Network security protects communication networks and their data from unauthorised access, use, disclosure, disruption, modification, or destruction. Cryptography, as a fundamental tool, is used in various aspects of network security to achieve these goals.
Specific topics of interest include, but are not limited to:
Important Dates:
Submission Deadline: November 20, 2024
First Review Notification: January 31, 2025
Revision Submission: April 30, 2025
Final Decision: May 15, 2025
Camera-Ready Version: June 15, 2025
Online Publication: July 2025
The review process will comply with the standard review process of the IJNS journal. Each paper will receive at least three reviews from experts in the field.
Guest Editor:
Prof. Dr. Hang Li, email: lihang@synu.edu.cn
Northeastern University & Shenyang Normal University
Dean of the Software College at Shenyang Normal University
Visiting Professor at Northeastern University
Director of the Intelligent Information Processing Laboratory at Shenyang Normal University
ORCID: 0000-0002-1230-4007