Gartner predicts that by 2026, 30% of enterprises will no longer consider identity verification and authentication solutions that rely on face biometrics to be reliable in isolation, owing to the threat of AI-generated deepfakes. Deepfakes are synthetic images of real people's faces, and they can be used to undermine biometric authentication. Current standards and testing processes for presentation attack detection (PAD) do not cover digital injection attacks that use AI-generated deepfakes, and injection attacks rose 200% in 2023.

To mitigate the threat, organizations need to combine PAD, injection attack detection (IAD), and image inspection tools. Chief information security officers and risk management leaders should work with vendors that can demonstrate capabilities beyond current standards and that are actively monitoring these new types of attacks. They should also incorporate device identification and behavioral analytics to improve the chances of detecting attacks on identity verification processes. Finally, security leaders should select technology that can prove genuine human presence and implement additional measures to prevent account takeover.
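The layered defense described above can be illustrated with a minimal decision sketch. Everything here is hypothetical: the signal names, thresholds, and weights are assumptions for illustration, not a vendor's actual API or a recommended calibration. The idea is that any single detection layer (PAD, IAD, or image inspection) flagging a likely attack rejects the session outright, while contextual signals such as device identification and behavioral analytics feed a combined score.

```python
from dataclasses import dataclass


@dataclass
class VerificationSignals:
    """Scores in [0, 1]; higher means more likely genuine. All fields are illustrative."""
    pad_score: float       # presentation attack detection (e.g. liveness check)
    iad_score: float       # injection attack detection (e.g. virtual-camera feeds)
    image_score: float     # image inspection (deepfake artifact analysis)
    device_score: float    # device identification / reputation
    behavior_score: float  # behavioral analytics (interaction patterns)


def is_genuine_presence(s: VerificationSignals,
                        hard_floor: float = 0.3,
                        combined_threshold: float = 0.6) -> bool:
    """Layered decision: any detector scoring below hard_floor rejects outright;
    otherwise a weighted combination of all signals must clear combined_threshold."""
    scores = [s.pad_score, s.iad_score, s.image_score,
              s.device_score, s.behavior_score]
    if min(scores) < hard_floor:
        return False
    # Weight the detection layers above the contextual signals (illustrative values).
    weights = [0.3, 0.3, 0.2, 0.1, 0.1]
    combined = sum(w * x for w, x in zip(weights, scores))
    return combined >= combined_threshold
```

In this sketch a session with strong scores across all layers passes, while a session whose injection-attack detector reports a very low score is rejected even if the face image itself looks genuine, reflecting the point that PAD alone is insufficient against injection attacks.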