As we move into an electronic information society, the technological means for global surveillance of millions of individual people are becoming available to major governments. Cryptography has become one of the main tools for privacy, trust, access control, electronic payments, corporate security, and countless other fields. Strong encryption is encryption that can be used to protect information of real value against organized criminals, multinational corporations, and major governments. The use of cryptography is no longer a privilege reserved for governments and highly skilled specialists, but is becoming available for everyone to make use of.
Suppose that someone wants to send a message to a receiver, and wants to be sure that no one else can read it. There is always the possibility, however, that someone else opens the letter or eavesdrops on the electronic communication.
In cryptographic terminology, the message is called plaintext or cleartext. Encoding the contents of the message in such a way that hides its contents from outsiders is called encryption. The encrypted message is called the ciphertext. The process of retrieving the plaintext from the ciphertext is called decryption. Encryption and decryption usually make use of a key, and the coding method is such that decryption can be performed only by knowing the proper key.
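The terminology above can be made concrete with a deliberately weak toy cipher. Repeating-key XOR is trivially breakable and is shown here only to illustrate the roles of plaintext, ciphertext, and key (the message and key below are made up):

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each plaintext byte with a repeating key byte.
    # For XOR, encryption and decryption are the same operation.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"attack at dawn"          # the cleartext message
key = b"secret"                        # the shared key
ciphertext = xor_cipher(plaintext, key)   # encryption
recovered = xor_cipher(ciphertext, key)   # decryption with the same key
```

Only someone knowing the key can recover the plaintext from the ciphertext; here, though, the key repeats so quickly that a cryptanalyst could deduce it easily, which is why real ciphers are far more elaborate.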
Cryptography is the art or science of keeping messages secret. Cryptanalysis is the art of breaking ciphers, i.e. retrieving the plaintext without knowing the proper key. People who do cryptography are cryptographers, and practitioners of cryptanalysis are cryptanalysts.
Cryptography deals with all aspects of secure messaging, authentication, digital signatures, electronic money, and other applications. Cryptology is the branch of mathematics that studies the mathematical foundations of cryptographic methods.
Basic Cryptographic Algorithms
Many good cryptographic hash functions are available. The most famous are those of the MD family, in particular MD4 and MD5. MD4 has been broken, and MD5, although still in widespread use, should be considered insecure as well. SHA-1 and RIPEMD-160 are two examples that are still considered state of the art.
SHA-1 (Secure Hash Algorithm; also SHS, Secure Hash Standard): a cryptographic hash algorithm published by the United States Government. It produces a 160-bit hash value from an arbitrary-length string. It is considered to be very good.
RIPEMD-160 is a hash algorithm designed to replace MD4 and MD5 (see below). It produces a digest of 20 bytes (160 bits, hence the name), reportedly runs at 40 Mb/s on a 90 MHz Pentium and has been placed in the public domain by its designers.
MD5 (Message Digest Algorithm 5) is a cryptographic hash algorithm developed at RSA Laboratories. It can be used to hash an arbitrary length byte string into a 128 bit value.
MD5's ancestor, MD4 has been broken, and there are some concerns about the safety of MD5 as well. In 1996 a collision of the MD5 compression function was found by Hans Dobbertin. Although this result does not directly compromise its security, as a precaution the use of MD5 is not recommended in new applications.
Tiger is a recent hash algorithm developed by Anderson and Biham.
MD2, MD4: These are older hash algorithms from RSA Data Security. They have known flaws (Hans Dobbertin, FSE'96, LNCS 1039), and their use is not recommended.
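The digest sizes discussed above can be checked with Python's standard hashlib module. MD5 and SHA-1 are used here only to demonstrate the 128-bit and 160-bit output lengths; as noted, MD5 should not be used in new applications:

```python
import hashlib

msg = b"hello world"
md5_digest = hashlib.md5(msg).digest()    # 128-bit (16-byte) value
sha1_digest = hashlib.sha1(msg).digest()  # 160-bit (20-byte) value

# A hash function maps an arbitrary-length input to a fixed-size digest.
print(len(md5_digest) * 8, len(sha1_digest) * 8)  # 128 160
```

The same input always yields the same digest, while even a one-byte change in the input produces a completely different digest; this is the property that makes hash functions useful for signatures and integrity checks.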
Cryptographic Random Number Generators
Cryptographic random number generators generate random numbers for use in cryptographic applications, such as for keys. Conventional random number generators available in most programming languages or programming environments are not suitable for use in cryptographic applications: they are designed for statistical randomness, not to resist prediction by cryptanalysts.
In the optimal case, random numbers are based on true physical sources of randomness that cannot be predicted. Such sources may include the noise from a semiconductor device, the least significant bits of an audio input, or the intervals between device interrupts or user keystrokes. The noise obtained from a physical source is then "distilled" by a cryptographic hash function to make every bit depend on every other bit. Quite often a large pool (several thousand bits) is used to contain randomness, and every bit of the pool is made to depend on every bit of input noise and every other bit of the pool in a cryptographically strong way.
When true physical randomness is not available, pseudo-random numbers must be used. This situation is undesirable, but often arises on general-purpose computers. It is always desirable to obtain some environmental noise, even from device latencies, resource utilization statistics, network statistics, keyboard interrupts, or whatever. The point is that the data must be unpredictable for any external observer; to achieve this, the random pool must contain at least 128 bits of true entropy.
Cryptographic pseudo-random number generators typically have a large pool ("seed value") containing randomness. Bits are returned from this pool by taking data from it, optionally running the data through a cryptographic hash function so that the contents of the pool are not revealed. When more bits are needed, the pool is stirred by encrypting its contents with a suitable cipher using a random key (which may be taken from an unreturned part of the pool) in a mode that makes every bit of the pool depend on every other bit. New environmental noise should be mixed into the pool before stirring to make predicting previous or future values even harder.
Even though cryptographically strong random number generators are not very difficult to build, they are often overlooked. Their importance must therefore be emphasized: if done badly, the random number generator easily becomes the weakest point of the system.
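The pool-based design described above can be sketched as follows, assuming SHA-256 as the mixing and output function. This is a simplified educational model, not a vetted generator; real applications should use the operating system's generator (e.g. os.urandom or the secrets module):

```python
import hashlib
import os
import time

class PoolPRNG:
    """Sketch of a hash-based pseudo-random generator with an entropy pool.
    Educational only -- use os.urandom / secrets in real applications."""

    def __init__(self):
        self.pool = b"\x00" * 32
        self.counter = 0

    def mix(self, noise: bytes) -> None:
        # "Stir" the pool: every bit of the new pool depends on the noise
        # and on every bit of the previous pool state.
        self.pool = hashlib.sha256(self.pool + noise).digest()

    def read(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            self.counter += 1
            # Hash the pool with a counter so the pool itself is never revealed.
            out += hashlib.sha256(self.pool + self.counter.to_bytes(8, "big")).digest()
        return out[:n]

prng = PoolPRNG()
prng.mix(os.urandom(16))             # environmental noise (here: the OS pool)
prng.mix(str(time.time()).encode())  # e.g. timing information
random_bytes = prng.read(32)
```

Note how output is derived from the pool through a one-way function, so an observer who sees the output cannot reconstruct the pool, and mixing in fresh noise changes all future output.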
Cryptography works on many levels. On one level you have algorithms, such as block ciphers and public key cryptosystems. Building upon these you obtain protocols, and building upon protocols you find applications (or other protocols).
It is not sufficient to study the security of the underlying algorithms alone, as a weakness on a higher-level protocol (or application) can render the application insecure regardless of how good the underlying cryptographic algorithms are. A simple example is a protocol that leaks information about the key being used to encrypt the communication channel. Irrespective of how good the encryption algorithms are, they are rendered insecure if the overlying protocol reveals information on the keys used in encryption.
The following sections mention several well-known protocols and standards that we have used.
Secure Sockets Layer (SSL)
SSL is one of the two protocols for secure WWW connections (the other is SHTTP). WWW security has become important as increasing amounts of sensitive information, such as credit card numbers, are being transmitted over the Internet.
SSL was originally developed by Netscape as an open protocol standard. openssl.org contains some documents and provides an open source implementation.
Secure Hypertext Transfer Protocol (SHTTP)
This is another protocol for providing more security for WWW transactions. In many ways it is more flexible than SSL, but due to Netscape's original dominance in the marketplace SSL is in a very strong position. [RFC2660]
E-Mail security and related services
OpenPGP is a standardization of what Phil Zimmermann's PGP had already been doing for many years. Now that it is a standard, different implementations are coming into existence.
Secure MIME (S/MIME) is an alternative to the OpenPGP standard, maintained by the IETF S/MIME working group.
Public-Key Cryptography Standards (PKCS)
These standards are developed at RSA Data Security and define safe ways to use RSA.
IEEE P1363: Standard Specifications for Public-Key Cryptography
An upcoming standard for public key cryptography. It consists of several public key algorithms for encryption and digital signatures.
Public Key cryptosystems
Public key cryptosystems were invented in the late 1970s, with some help from the development of complexity theory around that time. It was observed that a cryptosystem could be based on a problem so difficult that solving it would take thousands of years. Such a cryptosystem would have two keys, a secret key and a public key: messages could be encrypted with the public key and decrypted only with the private key. Thus the owner of the private key would be the only one who could decrypt the messages, but anyone knowing the public key could send them in privacy.
Another important idea was that of key exchange: in a two-party communication it is useful to generate a common secret key for bulk encryption using a secret key cryptosystem (e.g. some block cipher).
Indeed, Whitfield Diffie and Martin Hellman used ideas from number theory to construct a key exchange protocol that started the era of public key cryptosystems. Shortly after that Ron Rivest, Adi Shamir and Leonard Adleman developed a cryptosystem that was the first real public key cryptosystem capable of encryption and digital signatures.
Later, several public key cryptosystems followed, based on many different underlying ideas (e.g. knapsack problems, different groups over finite fields, and lattices). Many of them were soon proven insecure. However, the Diffie-Hellman protocol and RSA appear to have remained two of the strongest up to now.
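The Diffie-Hellman key exchange mentioned above can be sketched with textbook-sized numbers. The prime, generator, and secret exponents here are toy values for illustration; real parameters are thousands of bits long:

```python
# Public parameters agreed on by both parties (toy sizes only).
p, g = 23, 5

a = 6    # Alice's secret exponent (never transmitted)
b = 15   # Bob's secret exponent (never transmitted)

A = pow(g, a, p)  # Alice computes and sends A = g^a mod p
B = pow(g, b, p)  # Bob computes and sends B = g^b mod p

# Each side raises the other's public value to its own secret exponent.
shared_alice = pow(B, a, p)  # (g^b)^a mod p
shared_bob = pow(A, b, p)    # (g^a)^b mod p
```

Both sides arrive at the same shared secret even though only A and B travel over the wire; an eavesdropper who sees p, g, A and B would have to solve a discrete logarithm problem to recover it.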
Public Key Infrastructure (PKI)
Introduction to Public Key Infrastructure (PKI)
It is necessary to understand some of the basics of encryption, digital certificates and digital signatures before examining the components of a PKI.
"Encryption" is the term used to describe the process of taking legible data and scrambling it into a form that is unintelligible to anyone who does not know how to unscramble (or "decrypt") it again.
Encryption processes usually involve a method for encrypting the data and one or more "keys". Each key is usually a very long number, and is used during the encryption or decryption process.
In most cases, the method (or "algorithm") that is used by an application to encrypt data is common knowledge and the key that is used is kept private.
There are two main types of encryption - "symmetric encryption" (the same encryption key is used for encryption and decryption), and "asymmetric encryption" (different keys are used for encryption and decryption).
Asymmetric encryption algorithms use two keys - a "public key" and a "private key". The algorithm usually involves a mathematical step that is very easy to do one way, but very difficult to do in reverse.
The algorithm is designed such that data encrypted with one key of the pair can be decrypted only with the other key.
This method of encrypting data using a widely publicized public key and a separate private key is also called "Public Key Cryptography", and is the type of encryption that is utilized by digital certificates.
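The two-key property can be illustrated with textbook RSA using tiny primes. This is a classic teaching example; the numbers are far too small to be secure, and real implementations also apply randomized padding:

```python
# Textbook RSA with tiny primes: illustrates the public/private key
# property only. Real keys are 2048 bits or more.
p, q = 61, 53
n = p * q                  # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (requires Python 3.8+)

m = 65                     # the "message", encoded as a number < n
c = pow(m, e, n)           # encrypt with the public key (e, n)
recovered = pow(c, d, n)   # decrypt with the private key (d, n)
```

Anyone can perform the encryption step using the public pair (e, n), but reversing it without d would require factoring n, which is easy here and infeasible at real key sizes.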
One meaning of "certificate" is "a document testifying to the truth of something". A digital certificate is an electronic "certificate" that contains information about a user and is used (among other things) to verify who the user is. Digital certificates make use of Public Key Cryptography: the public key is stored as part of the digital certificate, while the private key is kept on the user's computer or in hardware such as smart cards or i-keys.
Digital certificates are based on the X.509 standard, as profiled for the Internet by the IETF.
The main uses of digital certificates are to prove a user's identity and to distribute the user's public key for encryption and digital signatures.
If digital certificates are to be used for security and identification purposes, certain conditions must be met: among other things, the issuing authority must be trusted, the certificate must be valid and unrevoked, and the user's private key must remain secret.
A digital signature is a small amount of data created using a private key; a corresponding public key can be used to verify that the signature was really generated with that private key. The algorithm used to generate the signature must be such that, without knowing the private key, it is not possible to create a signature that would verify as valid.
Digital signatures are used to verify that a message really comes from the claimed sender (assuming only the sender knows the secret key corresponding to his/her public key). They can also be used to timestamp documents: a trusted party signs the document and its timestamp with his/her secret key, thus testifying that the document existed at the stated time.
Digital signatures can also be used to testify (or certify) that a public key belongs to a particular person. This is done by signing the combination of the key and the information about its owner with a trusted key.
A digital signature of an arbitrary document is typically created by computing a message digest from the document and concatenating it with information about the signer, a timestamp, etc. The resulting string is then encrypted using the private key of the signer with a suitable algorithm. The resulting encrypted block of bits is the signature. It is often distributed together with information about the public key that was used to sign it. To verify a signature, the recipient first determines whether it trusts that the key belongs to the person it is supposed to belong to (using the web of trust or a priori knowledge), and then decrypts the signature using the public key of that person. If the signature decrypts properly and the information matches that of the message (proper message digest etc.), the signature is accepted as valid.
A digital signature is created as follows: compute a message digest of the document, combine it with information about the signer and a timestamp, and encrypt the result with the signer's private key.
When the message is received, the recipient decrypts the signature with the sender's public key, computes a fresh digest of the received message, and compares the two; if they match, the signature is valid.
Given that only the owner of the digital certificate can create the digital signature (because they are the only person who has access to their private key), attaching a digital signature to a transmission also proves the identity of the person who sent it.
Several methods for making and verifying digital signatures are available. The most widely known algorithm is RSA.
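The hash-then-sign scheme can be sketched by combining a message digest with textbook RSA at toy key sizes. The message strings are made up, and real signatures use large keys and a padding scheme such as RSA-PSS; this only shows the structure:

```python
import hashlib

# Toy "hash then sign" with textbook RSA parameters. Educational only.
p, q = 61, 53
n = p * q
e = 17                               # public (verification) exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private (signing) exponent

def sign(message: bytes) -> int:
    # 1. Compute a message digest, 2. "encrypt" it with the private key.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Decrypt the signature with the public key and compare digests.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"pay Alice 100 euros")
ok = verify(b"pay Alice 100 euros", sig)
tampered = verify(b"pay Alice 900 euros", sig)
```

Changing even one character of the message changes its digest, so the signature no longer verifies; and without d, an attacker cannot produce a signature that passes verification.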
Public Key Infrastructure
A Public Key Infrastructure (PKI) is made up of various software-based services and encryption technologies that are used to facilitate trusted and encrypted transactions over an insecure network.
Digital Certificates are used in most practical implementations of a Public Key Infrastructure.
The PKI for an organization typically includes components such as a Certificate Authority, a Registration Authority, and a directory service for publishing certificates and revocation lists.
PKI uses a standardized set of transactions based on asymmetric public key cryptography, a more secure and potentially much more functional mechanism for access to digital resources. The same systems can be used for securing physical access to controlled environments, such as your home or office.
In a PKI world, everyone would be issued at least one cryptographic key pair. Each key pair would consist of a secret (private) cryptographic key and a public cryptographic key. These keys are typically 1024-bit or 2048-bit strings of binary digits with a unique property: when one is used with an encoding algorithm to encrypt data, the other can be used with the same algorithm to decrypt the data; the encoding key cannot be used for decoding. A responsible party such as a notary public, passport office, government agency or trusted third party certifies public keys. The public key is widely distributed, often through a directory or a database that can be searched by the public, but the private key remains a tightly guarded secret of the owner. Between sender and receiver, secure messaging (or another secure transaction) would work as described below.
FIGURE 1: SENDER
For the sender (Fig. 1), the following steps occur: the message is hashed to a digest, the digest is signed with the sender's private key, and the message and signature are encrypted with the receiver's public key before being sent.
FIGURE 2: RECEIVER
For the receiver (Fig. 2), these steps occur: the transmission is decrypted with the receiver's private key, the signature is decrypted with the sender's public key, and the recovered digest is compared with a freshly computed digest of the message to verify both the sender's identity and the message's integrity.
Issues surrounding PKI
Through the use of PKI and digital signatures, one can prove to a third party or a court that a particular electronic document is authentic and can be traced to the person who digitally signed the document or transaction. This works because the cryptography and mathematics underlying a PKI system ensure that digitally signed documents cannot be forged. The digital certificate can be thought of as the electronic equivalent of an identification card. Thus, the authority that issues the digital certificates (known as a Certificate Authority) must be highly trusted and secure.
Besides security, there are other issues related to PKI: technology, legal framework and standards. The technology for PKI has been around for more than a decade and is relatively mature, and a number of countries have introduced legislation to recognize the validity of digital signatures.
The lack of commonly accepted industry standards for the policies and business practices surrounding PKI is probably the reason why PKI has not yet taken off in a big way. The introduction of IT laws by many countries has now created a basis for standardized business transactions, and forums like the Asia Pacific PKI Forum allow licensed digital certification authorities to interoperate with their counterparts in the member countries of the region. As financial institutions sign on to these policies and business practices, their customers will create an extensive global system of known and trusted businesses. Once certified by a Certification Authority, a trading partner can authenticate any other party with assurance. Even if a trading partner is from another part of the world, the fact that he is a certified member (through the trust relationship with his bank) makes trading viable and reduces the risk of transacting in the global system. By virtue of commonly accepted standards, trading partners will know that the parties they deal with have been identified and certified to equivalent requirements.
Certificate Authority (CA)
A Certificate Authority (CA) is a third party that is responsible for issuing digital certificates to users. Each digital certificate that the CA issues is digitally signed by the CA's private key. This ensures that the digital certificate has not been tampered with.
Each CA has its own procedure for identifying users. The procedure is usually listed in the CA's Certificate Practice Statement (CPS). Identification procedures range from little or no identification, through to a user having to provide 100 points worth of ID before being issued with a digital certificate.
Ideally, a CA is trusted and always follows its advertised Certificate Practice Statement.
Typically, browser software (for example, Niyamas Tyootelery) gives users the option of marking a given CA as trusted or not trusted.
A Certificate Authority also runs and maintains the server that contains the certificate database, maintains a list of any certificates that have been revoked, and publishes public keys and the revocation list into a publicly accessible directory service. The CA is also responsible for making sure that the server itself is physically secure and that the CA's private key is not compromised.
Certificate Authorities are usually arranged in a "chain", where any given CA has its root key signed by the next CA up the chain. The CA at the root (the top of the chain) signs its own root key. If a given CA is trusted by a user's software, every subordinate CA below it in the chain is automatically trusted, since the trusted CA has vouched for the trustworthiness of all Certificate Authorities below it.
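The chain-of-trust walk described above can be sketched as a simple lookup. The certificate names and issuer relationships below are hypothetical, and real verification also checks signatures, validity periods, and revocation; this only models the trust relationship:

```python
# Hypothetical issuer relationships: each certificate name maps to the
# name of the CA that signed it. The root CA signs its own key.
issuer_of = {
    "www.example.com": "Intermediate CA",
    "Intermediate CA": "Root CA",
    "Root CA": "Root CA",
}
trusted_roots = {"Root CA"}  # CAs the user's software has marked as trusted

def chain_is_trusted(name: str) -> bool:
    # Follow issuers upward; the chain is trusted if we reach a trusted root.
    seen = set()
    while name not in seen:
        if name in trusted_roots:
            return True
        seen.add(name)
        # Unknown issuer: get() returns the name itself and the loop ends.
        name = issuer_of.get(name, name)
    return name in trusted_roots
```

Because trust in the root propagates downward, marking a single root CA as trusted implicitly trusts every certificate whose chain terminates at that root.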
Registration Authority (RA)
Before a user can be issued with a digital certificate, they need to be identified according to the procedures of the Certificate Authority that is issuing the certificate. This registration process is often handled by a separate Registration Authority (RA).
A Registration Authority is responsible for identifying users and notifying the Certificate Authority that the user is allowed to be issued with a digital certificate. The RA does not sign or issue digital certificates directly.