Existing deep learning-based face recognition models are vulnerable to adversarial attacks due to the inherent fragility of the underlying networks. However, current attack methods generate adversarial examples that ...
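The snippet is truncated, but the standard way such attacks generate adversarial examples is a gradient-sign perturbation (FGSM-style). The sketch below is a minimal, hypothetical illustration on a toy linear "embedding" model, not the method of the paper being summarized: the input is nudged in the direction that increases the classification loss.

```python
import numpy as np

# Toy linear "face embedding" classifier: logits = x @ W.
# This is an assumed minimal setup for illustration; real attacks
# target deep networks, but the FGSM step is the same idea.
rng = np.random.default_rng(1)
W = rng.normal(size=(16, 4))   # 16-dim input, 4 identities
x = rng.normal(size=(16,))     # clean input
y = 2                          # true class index

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss(x):
    # cross-entropy of the true class
    return -np.log(softmax(x @ W)[y])

def grad_x(x):
    # gradient of the cross-entropy w.r.t. the input:
    # d/dx -log softmax(x @ W)[y] = W @ (softmax(x @ W) - onehot(y))
    p = softmax(x @ W)
    p[y] -= 1.0
    return W @ p

eps = 0.1
# FGSM: one step along the sign of the loss gradient
x_adv = x + eps * np.sign(grad_x(x))
```

For a convex loss like this one, the perturbation provably does not decrease the loss, which is why even a single gradient-sign step reliably degrades the true-class score.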
Abstract: Knowledge Distillation (KD) is crucial for optimizing face recognition models for deployment in computationally limited settings, such as edge devices. Traditional KD methods, such as Raw L2 ...
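The abstract is cut off after "Raw L2", but raw L2 distillation conventionally means minimizing the mean squared distance between student and teacher feature embeddings. The following is a minimal sketch under that assumption (all names and the toy data are hypothetical, not from the paper):

```python
import numpy as np

def l2_distillation_loss(student_feats, teacher_feats):
    """Raw L2 (feature-matching) distillation loss: mean over the batch
    of the squared Euclidean distance between student and teacher
    embeddings."""
    diff = student_feats - teacher_feats
    return float(np.mean(np.sum(diff ** 2, axis=1)))

# toy example: a batch of 4 face embeddings of dimension 8
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 8))          # frozen teacher embeddings
student = teacher + 0.1 * rng.normal(size=(4, 8))  # imperfect student
kd_loss = l2_distillation_loss(student, teacher)
```

In practice this term is added to the student's task loss during training so a small model can be deployed on edge devices while tracking the teacher's embedding space.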