AI & Fully Homomorphic Encryption to Counter Identity Threats
Identity threats such as theft and fraud have significant financial and personal consequences for individuals and
financial services companies; per Javelin Strategy & Research, total identity fraud losses were $43 billion in 2022.
Increasingly, identity information such as biometrics is used as a means of authentication and access control.
The theft of one’s identity not only has financial ramifications, it also carries national security risk, as access to sensitive
information and/or critical infrastructure may be compromised. Finally, the loss of public trust in government
institutions, sensitive industries, and the overall digital ecosystem must not be underestimated.
PureCipherTM uses novel and advanced Artificial Intelligence (AI) models based on Convolutional Neural Networks
(CNNs), computed over data/images protected with Fully Homomorphic Encryption (FHE), in concert with our Data
Integrity OmniSealTM (OmniSealTM) technologies to (1) authenticate incoming and outgoing data from the AI, and (2)
ensure closed-loop communication within the system using FHE methods, thus allowing a trusted AI system to fight
identity threats. Our innovations in leveraging emerging FHE and steganographic watermarking technologies, and in
creating new AI/Machine Learning (ML) architectures and approaches to train and perform inferences over FHE data,
will create a quantum-safe capability built for a Zero Trust world. Consistent with the desired outcomes of using Trusted
AI to fight identity threats, our technology solutions will prevent and mitigate novel identity and fraud risks,
protect the exchange of anti-fraud and threat information, enhance personal data privacy, and enhance overall
cyberspace security.
Countering identity theft can be better addressed with intelligently designed AIs. It is expected that by 2025, 96% of
global supply chain and manufacturing businesses will already be utilizing, or at least evaluating use cases for, AI1. The
world of Identity and Access Management is equally affected by these trends. The industry is rapidly embracing AI
for authentication, identity management, and secure access controls. For example, AI-driven behavioral patterns are
increasingly used to grant or deny access and to detect breaches. Other techniques, such as behavior-based adaptive
access controls, also depend on AI algorithms. One operational challenge posed by these transformational capabilities
is that with today’s technologies, AI models are trained, and inferences performed, on unencrypted data. The entire
ecosystem is therefore subject to the risk of false data injection, tampering, and intruder observation. An undiscovered
breach at the heart of an AI-driven identity and access management system could have even greater catastrophic
consequences as more systems operate unattended to control access and identities.
To authenticate incoming data to ensure no false data injection or tampering, PureCipher’s OmniSealTM uses
steganography watermarking and multiplex encoding to bring about a new frontier in data security and integrity.
Here, the stealthy art of steganography is leveraged to embed and encode machine-readable messages (seals), a
form of covert and often indiscernible identifier, directly into datasets. This combination allows for the protection
of data from unauthorized usage and manipulation, while also ensuring its traceability. Embedding seals using
steganographic techniques amplifies their inherent protection capabilities by making them harder to detect and
remove without the exact knowledge of their placement. With OmniSealTM, we are able to create an almost
imperceptible layer of data security that bolsters the resilience of our datasets against adversarial attacks, while also
preserving data ownership and lineage. This opens up possibilities for safe data sharing, secure collaborative
learning and enhanced trust in machine learning models.
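The seal-embedding idea can be sketched with a minimal least-significant-bit (LSB) scheme. OmniSealTM's actual multiplex encoding is proprietary, so the function names and the one-bit-per-byte layout below are illustrative assumptions only:

```python
# Minimal LSB steganography sketch. This is NOT OmniSealTM's encoding; it only
# illustrates hiding a machine-readable seal inside carrier data so that the
# seal is invisible without knowing its placement.

def embed_seal(carrier: bytearray, seal: bytes) -> bytearray:
    """Hide `seal` in the LSBs of `carrier`, one bit per carrier byte."""
    bits = [(byte >> i) & 1 for byte in seal for i in range(8)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for seal")
    out = bytearray(carrier)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit   # overwrite only the lowest bit
    return out

def extract_seal(carrier: bytearray, seal_len: int) -> bytes:
    """Recover a seal_len-byte seal from the LSBs of `carrier`."""
    seal = bytearray()
    for i in range(seal_len):
        byte = 0
        for j in range(8):
            byte |= (carrier[i * 8 + j] & 1) << j
        seal.append(byte)
    return bytes(seal)

data = bytearray(range(200))           # stand-in for pixel bytes of an image
sealed = embed_seal(data, b"PC-SEAL")  # each carrier byte changes by at most 1
assert extract_seal(sealed, 7) == b"PC-SEAL"
```

Because each carrier byte changes by at most one unit, an image carrying a seal is visually indistinguishable from the original.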
PureCipher’s research and development on training AI models and performing inference on encrypted data using Fully
Homomorphic Encryption (FHE), in combination with our OmniSealTM technologies, will truly enable a Trusted &
Secure AI environment. Once data is encrypted with our quantum-safe FHE scheme, it will not need to be decrypted
in order for analysis (computations) to be performed, the long-sought holy grail of encryption. The use of OmniSealTM
will allow the receiving party to check the authenticity of received data; if the information was tampered with along
the way, the watermark would no longer be valid. The data would then not be used for AI training or inference,
ensuring a trusted and secure AI.
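The accept-or-reject logic described above can be illustrated with a standard keyed hash (HMAC). This is not OmniSealTM's actual mechanism, which embeds the seal steganographically; the sketch only shows how an invalid seal causes data to be rejected before it reaches AI training or inference:

```python
import hashlib
import hmac

# Illustrative stand-in for seal verification: a keyed HMAC-SHA256 tag plays
# the role of the embedded watermark. Any modification of the data invalidates
# the tag, so tampered inputs are rejected before AI use. KEY is a hypothetical
# secret shared by sender and receiver.

KEY = b"shared-secret-key"

def seal(data: bytes) -> bytes:
    """Compute the integrity tag for `data`."""
    return hmac.new(KEY, data, hashlib.sha256).digest()

def is_authentic(data: bytes, tag: bytes) -> bool:
    """Constant-time check that `data` still matches its tag."""
    return hmac.compare_digest(seal(data), tag)

record = b"biometric-template-0421"
tag = seal(record)
assert is_authentic(record, tag)             # untampered data passes
assert not is_authentic(record + b"x", tag)  # any tampering is detected
```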
PureCipher’s FHE is based on the CKKS scheme. The CKKS scheme2 is a quantum-safe Homomorphic Encryption
(HE) scheme that can efficiently perform computations on floating-point (decimal) numbers. This cryptographically
secure method adds random noise to the data during encryption; the noise is amplified during computations, but as
long as the noise level remains low enough, the encrypted data can still be decrypted correctly. HE schemes become
fully homomorphic (FHE) schemes when a specific operation called bootstrapping is implemented, which “resets” the
noise so that computations can continue indefinitely. Training and utilizing AI models
over FHE data is inherently an applied research and engineering problem. Currently, most FHE operations carry a
large overhead, which makes these encrypted computations slower than their unencrypted counterparts. Building
deep neural network-based AI models that can train and compute in the FHE space carries even greater overhead due
to the many layers of computation and large input parameters. The key challenge, however, stems from the limitations
standard FHE implementations impose on the allowed mathematical operations. By their very nature, AI algorithms
often rely on nonlinear mathematical operators, while FHE arithmetic is limited to multiplication and addition, so
nonlinear functions such as exponentials must be replaced with polynomial approximations. Performing inference over
FHE data thus requires converting the activation functions in a deep neural network into corresponding mathematical
expressions that work in the FHE space, or creating new AI models that do not exist today; both
tasks require innovation, dedication, and research & development know-how.
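The underlying homomorphic property, performing arithmetic on ciphertexts without ever decrypting them, can be demonstrated with a much simpler textbook scheme. The sketch below uses Paillier encryption, which is only additively homomorphic and, unlike CKKS, neither quantum-safe nor able to handle floating-point values; the tiny primes are for demonstration only:

```python
import math
import random

# Toy Paillier cryptosystem: multiplying two ciphertexts adds their plaintexts,
# so sums are computed entirely under encryption. This is NOT CKKS and not
# quantum-safe; it only demonstrates the compute-on-ciphertext idea.

p, q = 293, 433                       # tiny demo primes; real keys use ~2048-bit primes
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)          # Carmichael function of n
g = n + 1                             # standard simple choice of generator

def L(u: int) -> int:
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:        # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(17), encrypt(25)
c_sum = (c1 * c2) % n2                # ciphertext product encrypts the plaintext sum
assert decrypt(c_sum) == 42           # 17 + 25, computed without ever decrypting
```

CKKS extends this idea to both addition and multiplication over encrypted floating-point vectors, which is what makes neural-network arithmetic possible at all.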
Addressing these challenges within the constraints imposed by FHE will require combining a variety of diverse
techniques and new holistic design approaches such as:
Inventing novel deep neural network architectures.
Designing simple approximators for standard nonlinear operators such as exponentials and roots.
Implementing creative function substitutions.
Developing new AI algorithms that can be converted to FHE operations.
Creating innovative solutions possibly founded upon hyper-vector computing and bitwise operators.
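As an example of the second item, a degree-3 polynomial built only from additions and multiplications can stand in for the sigmoid activation on a bounded input range, a substitution commonly used in the encrypted-inference literature; the specific Taylor-based polynomial below is illustrative, not necessarily PureCipher's choice:

```python
import math

# Sketch: FHE arithmetic supports only addition and multiplication, so a
# nonlinear activation like the sigmoid must be replaced by a low-degree
# polynomial, here sigma(x) ~= 0.5 + x/4 - x^3/48.

def sigmoid(x: float) -> float:
    """True sigmoid, which FHE cannot evaluate (needs exp and division)."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_poly(x: float) -> float:
    """Degree-3 substitute using only additions and (constant) multiplications,
    so it can be evaluated directly on CKKS ciphertexts."""
    return 0.5 + 0.25 * x - (x * x * x) / 48.0

# On [-2, 2] the polynomial tracks the true sigmoid to within about 0.05.
xs = [i / 100.0 for i in range(-200, 201)]
max_err = max(abs(sigmoid(x) - sigmoid_poly(x)) for x in xs)
assert max_err < 0.05
```

The approximation degrades outside the fitted range, so inputs must be normalized before encryption, one reason FHE-friendly network architectures have to be designed from the ground up.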
The PureCipher team has successfully created two dense neural network-based (DNN) AI models trained on public
plaintext datasets but using FHE to provide inferences. We ultimately intend to train a variety of AI models over
encrypted data, including a CNN for image recognition and validation, which could be applicable to biometric
identification.
Implementing quantum-safe encrypted AI models will allow individuals to encrypt their own data, ensuring
user privacy and minimizing identity fraud and theft. Such models will also enable enterprises to perform
necessary analysis over encrypted data, eliminating a huge cyberattack surface. In a real-world setting, such
encrypted models could be used for secured and privacy preserving biometric identity verification, secured access
and control for critical infrastructure and industries with sensitive information, and by government agents to
identify containers that pose a potential risk for terrorism, drugs, or other contraband. Having AI models built upon
and utilizing only encrypted data would provide an additional layer of cyberspace privacy and security.