~ecc/impnet_paper

2530276ead35e4e8afa24b1cf5084e7f502df7dc — Ross Anderson 7 months ago d596ca2
Update on Overleaf.
1 file changed, 3 insertions(+), 3 deletions(-)

M sections/discussion.tex
M sections/discussion.tex => sections/discussion.tex +3 -3
@@ -39,7 +39,7 @@ sequences to robustly hide information, for example in
low-probability-of-intercept communications \citep{Scholtz1982TheOO,
Anderson2020EIW}.

-Further, as detailed by \citet{Gao2022OnTL}, Stochastic
+Further, as detailed by \citet{Gao2022OnTL}, stochastic
preprocessing defences have an inherent stochasticity-utility
tradeoff, which limits their usefulness.
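
A minimal sketch of such a stochastic preprocessing step, assuming a generic NumPy-based classifier pipeline (the function name, the noise scale sigma, and the `model` call are illustrative assumptions, not part of the paper): larger noise is more likely to disrupt a hidden trigger, but also costs clean accuracy, which is the tradeoff noted above.

import numpy as np

def stochastic_preprocess(x, sigma, rng=None):
    """Randomise an input before inference by adding Gaussian noise.

    A larger sigma is more likely to corrupt a hidden trigger pattern,
    but it also degrades accuracy on clean inputs: the
    stochasticity-utility tradeoff.
    """
    rng = rng or np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=x.shape)

# Hypothetical usage with any deployed classifier `model`:
# y = model(stochastic_preprocess(x, sigma=0.1))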



@@ -66,10 +66,10 @@ input} defence in \rsec{new-defences}.
\textbf{Certified backdoor defences}, as first suggested by
\citet{Wang2020OnCR}, add random noise to the training data and
sometimes to the deploy-time input, in order to certify robustness
-guarantees against $l_2$-norm pertubation backdoors. This can be
+guarantees against $l_2$-norm perturbation backdoors. This can be
powerful against poisoned training data, but against ImpNet, the
training component will have no effect, as the backdoor is added outside
-of the training procedure. For the deploy-time componen, the same
+of the training procedure. For the deploy-time component, the same
considerations apply as for \emph{preprocessing-based defences} above.
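
A minimal sketch of the deploy-time noising component, in the style of randomised smoothing, assuming a generic classifier `model` that returns a class index (the function and parameter names are illustrative assumptions); the training-time component is omitted since, as noted above, it has no effect on ImpNet.

import numpy as np

def smoothed_predict(model, x, sigma, n_samples=100, rng=None):
    """Deploy-time noising in the style of a certified smoothing defence.

    Classifies n_samples Gaussian-noised copies of x and returns the
    majority label; the vote margin is what certification arguments
    bound against small l2-norm perturbations.
    """
    rng = rng or np.random.default_rng()
    votes = {}
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        label = int(model(noisy))  # `model` is assumed to return a class index
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)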

\textbf{Runtime inspection of layer outputs}, as suggested by