@@ -39,7 +39,7 @@ sequences to robustly hide information, for example in
low-probability-of-intercept communications \citep{Scholtz1982TheOO,
Anderson2020EIW}.
-Further, as detailed by \citet{Gao2022OnTL}, Stochastic
+Further, as detailed by \citet{Gao2022OnTL}, stochastic
preprocessing defences have an inherent stochasticity-utility
tradeoff, which limits their usefulness.
@@ -66,10 +66,10 @@ input} defence in \rsec{new-defences}.
\textbf{Certified backdoor defences}, as first suggested by
\citet{Wang2020OnCR}, add random noise to the training data and
sometimes to the deploy-time input, in order to certify robustness
-guarantees against $l_2$-norm pertubation backdoors. This can be
+guarantees against $l_2$-norm perturbation backdoors. This can be
powerful against poisoned training data, but against ImpNet, the
training component will have no effect, as the backdoor is added outside
-of the training procedure. For the deploy-time componen, the same
+of the training procedure. For the deploy-time component, the same
considerations apply as for \emph{preprocessing-based defences} above.
\textbf{Runtime inspection of layer outputs}, as suggested by