This article throws light upon the top five aspects of network security. The aspects are: 1. Secrecy 2. Integrity Control 3. Authentication 4. Cryptography 5. Virtual Private Networks (VPNs).

Aspect # 1. Secrecy:

The standard case quoted above, of keeping certain files secret (totally or in part) from some users, arises in most organisations. It can be handled easily by giving these files password protection. If a particular file is to be seen only by its creator, he/she can protect it by asking the system to allow access only to those who have the password.

Similarly, it is usually possible to give a file two levels of protection—protection against anyone (other than the authorised password owner) trying to read the file, and protection against anyone (other than the person/persons authorised) trying to write to the file.
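One common way such two-level protection is realised in practice is through file-system permission bits rather than passwords as such; the following is a minimal sketch (Python on a POSIX-style system, with a purely hypothetical file name):

```python
import os
import stat

path = "annual_accounts.txt"   # hypothetical file to be protected

# Level 1: the owner may read and write; everyone else may only read.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)

# Level 2: the owner may read and write; no one else has any access at all.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
```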

This is the simplest kind of “secrecy protection“. For more elaborate security, the file could be encrypted and the encryption key given only to authorised personnel.

Several encryption programmes are available in the market, and any one of them could be used to encrypt the secret file. This encryption could be done several times over to increase security. It must be remembered that encrypting a file carries a cost, and multiple encryption further increases this cost.
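As a hedged sketch of what such a programme does, the snippet below encrypts and decrypts a file using the third-party Python cryptography package (assumed to be installed; the file name is hypothetical). The key is the piece that would be given only to authorised personnel.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # distribute only to authorised personnel
cipher = Fernet(key)

with open("secret_report.txt", "rb") as f:
    plaintext = f.read()

ciphertext = cipher.encrypt(plaintext)
with open("secret_report.txt.enc", "wb") as f:
    f.write(ciphertext)

# Only a holder of the same key can recover the original contents.
assert cipher.decrypt(ciphertext) == plaintext
```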

Therefore, the value of the contents of the file must be looked at carefully before encryption is done, otherwise one may end up with the situation where the cost of encryption is greater than the value of data encrypted.

Aspect # 2. Integrity Control:

Integrity control can best be illustrated by the example given above of the website of the Indian Army. Since this is a propaganda site, the figures quoted by the Indian Army have great relevance from their point of view.

Therefore, their attempt must be to ensure that no unauthorized person is permitted to alter the contents of their website. To do this, it is necessary that the user’s programme not be permitted to write anything, while being permitted to read the entire contents of the web page.

The simple way to do this is by ensuring that the files are "read only" files and that the permission to write is password-protected. There may also be alternative methods. However, an interesting case may occur if the user is not permitted to download any data from the contents of a web page.

A bright user can create a programme that reads the data in such a manner that the contents of the file are transmitted by some permitted action that it performs while reading.

For example, a pause while reading may indicate a digital 0 and the action of reading may indicate a digital 1, thereby creating a code that can be interpreted at the user's end. This may sound elaborate, but it is mentioned here merely to point out the possibility.

Another issue of integrity control concerns barring a virus, carried in any programme, from entering one's system. This can be done by using "firewalls". The firewall is an old medieval concept: it ensured that no unwanted visitor could enter the fortress easily. An example of a firewall is given in Fig. 7.9.

[Fig. 7.9: Firewall]

There are two intermediate routers that perform packet filtering. These routers do not allow any undesirable data to either go out of the system or come in from outside. The gateway could be an application gateway that examines specific data and, based on some logic, decides which data is to be kept out of the organisation's network, or bars an outward-bound packet from going out.
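A minimal sketch of the kind of rule a packet-filtering router applies is given below; the rule table, ports and addresses are invented purely for illustration.

```python
# Hypothetical filtering rules for a very small firewall.
BLOCKED_INBOUND_PORTS = {23, 135, 445}        # e.g. telnet and file-sharing ports
BLOCKED_OUTBOUND_HOSTS = {"203.0.113.66"}     # an example "bad" destination address

def allow_packet(direction, dst_host, dst_port):
    """Return True if the packet may pass the filter."""
    if direction == "in" and dst_port in BLOCKED_INBOUND_PORTS:
        return False
    if direction == "out" and dst_host in BLOCKED_OUTBOUND_HOSTS:
        return False
    return True

print(allow_packet("in", "10.0.0.5", 23))          # False: blocked inbound port
print(allow_packet("out", "203.0.113.66", 443))    # False: blocked destination
print(allow_packet("out", "198.51.100.7", 80))     # True: allowed to pass
```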

While the idea of a firewall seems to solve all the problems, it is actually of limited use. For example, assume that the application gateway is checking e-mail. It could withhold permission to transmit a document that contains the word "sex". This could mean that no advertisement could be sent out, or application received, that mentioned the sex of the applicant.

Or the word "nuclear" could trigger this withholding of permission even where it is used in connection with a "nuclear family".

While these examples may evoke levity, the matter is serious enough to cast doubt on the effectiveness of firewalls. The reader may notice the emphasis placed, in the statement above, on "… no unwanted visitor could enter the fortress easily". We shall return to this issue a little later, at the end of the discussion on security.

Aspect # 3. Authentication:

Authentication may be a very important issue in certain cases. For example, a bank manager, being asked by a person to transfer funds from his account to another account, must have some authentication. In the normal course, the account holder would send a signed letter or some signed document to the bank manager.

If the request is transmitted electronically, however, the issue of authentication becomes important. This, of course, may be an extreme example; authentication may be required in many other cases as well.

In networks, the process of identification consists of verifying that the person communicating on the network really is who he or she claims to be—in other words, that it is the real McCoy and not some eavesdropper or malicious intruder.

Not surprisingly, this is a difficult exercise. Suppose that you are conversing with your banker about the transfer of funds from your account to some other account. Clearly, this conversation must be authenticated. What happens in such cases is that a "session key" is established for the conversation once the initial authentication protocol is completed. What is used for that authentication is usually called a "challenge-response protocol".

Earlier, a secret key (or password) will have been agreed between the two parties, say X1 and X2. When they wish to communicate, one of them, say X1, sends a challenge (a random number); X2 receives it, transforms it using the shared secret, and sends the result back as its response. If X1 finds the response correct, it knows it is indeed talking to X2.
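A minimal sketch of such an exchange, using a keyed hash (HMAC) of the challenge as the response, is given below; this is one common realisation, assumed here for illustration, and the pre-shared key is hypothetical.

```python
import hmac
import hashlib
import secrets

shared_secret = b"agreed-earlier-between-X1-and-X2"   # hypothetical pre-shared key

# X1 issues a random challenge.
challenge = secrets.token_bytes(16)

# X2 proves knowledge of the secret by returning a keyed hash of the challenge.
response = hmac.new(shared_secret, challenge, hashlib.sha256).digest()

# X1 recomputes the expected response and compares the two in constant time.
expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
print(hmac.compare_digest(response, expected))   # True: the responder knows the secret
```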

The exchange is like two spies meeting: the first says "It snowed last summer" and, if the other spy finds this challenge to be correct, replies with the correct response "We sweated last winter", whereupon they exchange secret papers, knowing that they have each found the right person.

The challenge-response protocol works in exactly that way. One way of establishing a secret key between two principals, who are strangers to each other, is known as "Diffie-Hellman key exchange". We will not go into an analysis of this method of establishing secret keys; suffice it to say that it can be done properly.
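Nevertheless, a toy sketch gives the flavour of the Diffie-Hellman idea; the numbers below are deliberately tiny and insecure, chosen only for illustration.

```python
import secrets

# Public parameters: a prime p and a generator g (real deployments use huge values).
p, g = 23, 5

# Each principal picks a private exponent and publishes only g^x mod p.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)   # sent by the first principal
B = pow(g, b, p)   # sent by the second principal

# Both sides derive the same shared secret, which never travels over the network.
assert pow(B, a, p) == pow(A, b, p)
print("shared key:", pow(B, a, p))
```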

Aspect # 4. Cryptography:

Cryptography is the art of devising codes and ciphers, while cryptanalysis is the art of breaking ciphers.

Cryptology is the combination of both of these. In the literature of cryptology, information to be encrypted is known as plaintext, and the parameters of the encryption function that transforms it are called a key. Before the modern era, cryptography was concerned only with keeping messages confidential, particularly in transit.

In order to do this, the message would be converted from plain English to some form of gibberish—using some predetermined key—and would be reconverted into plain English using the same pre-agreed key at the receiving end. This could keep the message confidential from any eavesdropper or malicious interceptor.

During the Second World War, the American army widely transmitted messages in a pre-agreed American Indian tongue, about which the Japanese had no clue. Thus, the original message would be transmitted in, say, the Navajo tongue, and at the receiving end be converted to English, thereby becoming understandable to the commander to whom it was sent.

Quantum Cryptography:

Existing cryptographic techniques are usually identified as "traditional" or "modern". Traditional techniques—consisting of substitution and transposition—which have been discussed so far, were designed to be simple, and if they were to be used with great secrecy, extremely long keys would be needed. Modern techniques, on the other hand, rely on intractable problems to achieve assurances of security.

There are two branches of modern cryptographic techniques: “secret key” encryption and “public key” encryption. In public key cryptography, messages are exchanged using keys that depend on the assumed difficulty of certain mathematical problems—generally consisting of the factoring of the product of two extremely large (over 100-digit) prime numbers. Each of the two parties has a public key and a private key.

The public key is used by others to encrypt messages and the private key is used by the receiver to decrypt the messages. In secret key encryption, a k-bit secret key is shared by two users, who use it to transform some plain text inputs to encoded cipher messages. By carefully designing transformation algorithms, each bit of output can be made to depend on every bit of the input.

With such an arrangement, a 128-bit key used for encoding gives a key space of 2^128 (or about 10^38) possible keys.

Assuming that brute force along with some parallelism is employed, a billion computers each performing a billion operations per second would still require on the order of ten trillion years to try every key. In actual practice, an analysis of the encryption algorithm might make it more vulnerable, but increases in the size of the key can be used to offset this. In short, this code can be said to be unbreakable.
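A quick back-of-the-envelope check of that estimate:

```python
keys = 2 ** 128                          # size of the key space
ops_per_second = 10 ** 9 * 10 ** 9       # a billion computers, a billion operations/s each
seconds_per_year = 365 * 24 * 3600

years = keys / ops_per_second / seconds_per_year
print(f"{years:.2e} years")              # roughly 1e13, i.e. about ten trillion years
```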

The roots of quantum cryptography lie in a proposal by Stephen Wiesner called "Conjugate Coding" in the early 1970s. In 1984, Bennett and Brassard, who were familiar with Wiesner's ideas, produced "BB84", the first quantum cryptography protocol.

The elements of quantum information exchange are observations of quantum states; typically, photons are put into a particular state by the sender and then observed by the recipient. Because of Heisenberg's Uncertainty Principle, certain quantum information occurs as conjugates that cannot be measured simultaneously.

Depending on how the observation is carried out, different aspects of the system can be measured. Thus, if the sender and receiver have not used the same basis for preparing and measuring a quantum state, the receiver may inadvertently destroy the sender's information without gaining anything useful.
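The basis-matching idea can be sketched classically (a toy simulation, not real quantum optics): only the positions where the sender's and receiver's randomly chosen bases happen to agree contribute usable key bits.

```python
import random

n = 16
sender_bits    = [random.randint(0, 1) for _ in range(n)]
sender_bases   = [random.choice("+x") for _ in range(n)]   # preparation bases
receiver_bases = [random.choice("+x") for _ in range(n)]   # measurement bases

sifted_key = []
for bit, b_send, b_recv in zip(sender_bits, sender_bases, receiver_bases):
    if b_send == b_recv:          # bases agree: the measured bit is reliable
        sifted_key.append(bit)
    # bases differ: the measurement result is random, so the bit is discarded

print("sifted key:", sifted_key)  # on average about half the positions survive
```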

Symmetric Keys:

Symmetric keys are used for encryption as well as for decryption; the relationship between the encryption and decryption keys is sometimes trivial, but can be quite complicated if they are to be used for serious purposes.

Symmetric key algorithms can be of two types: stream ciphers and block ciphers. Stream ciphers encrypt bits of data one at a time; block ciphers take a number of bits and encrypt them as a single unit.

NIST (National Institute of Standards and Technology) approved an Advanced Encryption Standard (AES) algorithm that uses 128-bit blocks. Symmetric key algorithms use the same key for both encryption and decryption. They are generally not very computationally intensive and are typically thousands of times faster than asymmetric algorithms.
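As a hedged sketch of a symmetric cipher in use, the snippet below encrypts and decrypts with AES via the third-party Python cryptography package (assumed to be installed); note that the single key does both jobs.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)     # the one shared symmetric key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                        # must never repeat for the same key

ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)   # the same key decrypts
print(plaintext)
```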

Symmetric keys work as a "shared secret key", with one copy held by the sender and one by the receiver.

Therefore, in a population of n people a total of n(n - 1)/2 keys are needed to ensure secure communication in the entire population. Modern block ciphers are usually based on a construction proposed by Horst Feistel. His construction allows the building of invertible functions from those that are not themselves invertible.

Symmetric ciphers have historically been susceptible to attacks. Careful construction of the functions for each round can greatly reduce the chances of successful attacks. Pseudorandom key generators are always used to generate the symmetric cipher keys. However, lack of randomness in these generators or in their initialization vectors is disastrous and invariably leads to cryptanalytic breaks.

DES, or the Data Encryption Standard (the best known), and the International Data Encryption Algorithm (IDEA) are two well-known examples of secret key encryption functions. DES was developed by IBM and adopted by the US Government in 1977 as its official standard for unclassified information. Figure 7.10 explains how it works.

[Fig. 7.10: DES cipher coding]

Figure 7.10 shows the outline of how the DES coding is done. Plain text is encrypted in blocks of 64 bits, yielding 64 bits of cipher text. The 64 bits of plain text pass through an initial transposition (indicated by the first block).

The text then passes through 16 iteration stages, each worked on by a subkey derived from the 56-bit key. Then the next block (the 18th and second-last block shown above) undergoes a 32-bit swap: the leftmost 32 bits are swapped with the rightmost 32 bits.

The earlier 16 stages are functionally the same, except that each is parameterised by a different function of the key. In the last block, an inverse transposition is done and the cipher text is ready. Decryption is done using the same key, but with the steps applied in reverse order. In each intermediate stage, two 32-bit inputs are taken to produce two 32-bit outputs.

The left output is merely a copy of the right input. The right output is the bitwise EXCLUSIVE OR of the left input and a function of the right input and the key for this stage. The complexity of the coding lies entirely in that function. However, despite all the complexity, DES is a mono-alphabetic substitution cipher, albeit one that uses a 64-bit "character": the same 64-bit plaintext block always produces the same 64-bit ciphertext block, and a good cryptanalyst can use this property of DES to attack the cipher.
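The Feistel structure just described (left output = right input; right output = left input XOR f(right input, stage key)) can be traced with a toy sketch; the round function below is invented purely for illustration and bears no resemblance to the real DES f-function.

```python
def round_fn(half, key):
    # Toy keyed mixing function (NOT the DES f-function).
    return ((half * 0x9E3779B1) ^ key) & 0xFFFFFFFF

def feistel_encrypt(left, right, keys):
    for k in keys:                          # one pass per stage key
        left, right = right, left ^ round_fn(right, k)
    return left, right

def feistel_decrypt(left, right, keys):
    for k in reversed(keys):                # undo the stages in reverse order
        left, right = right ^ round_fn(left, k), left
    return left, right

stage_keys = [0x1234, 0xBEEF, 0xC0DE, 0xF00D]      # stand-ins for the per-stage keys
ct = feistel_encrypt(0xDEADBEEF, 0x01234567, stage_keys)
print(feistel_decrypt(*ct, stage_keys) == (0xDEADBEEF, 0x01234567))   # True
```

Note that decryption works even though round_fn itself is not invertible; that is exactly the point of Feistel's construction mentioned earlier.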

Public Keys:

Broadly speaking, there are three types of cryptographic algorithms—secret key algorithms, public key algorithms and hashing algorithms. Secret key algorithms have been discussed above as symmetric key algorithms. We shall now turn our attention to public keys. As opposed to secret key schemes, public key schemes require that each participant has a pair of keys: a public key and a private key.

In other words, the message is encrypted using a widely known and published public key, and each participant decrypts messages addressed to him or her using his or her private key. The most widely known public key algorithm is RSA, named after the initials of its inventors, Rivest, Shamir and Adleman.

RSA involves different keys for encryption (the public key) and decryption (the private key), and it is based on number theory. The essential feature of RSA comes from how these two keys are designed and selected.

The act of encrypting and decrypting the message is expressed in a single function, although the function requires considerable computational power. RSA uses a key length of 1024 bits, making it considerably more expensive to compute than DES.

The computational procedure is as follows:

The first step requires the generation of a public and a private key. To do this, select two large prime numbers p and q, and multiply them to get n, making sure that both p and q are about 256 bits long. Next, select an encryption key e such that e and (p - 1) × (q - 1) are relatively prime. Two numbers are said to be relatively prime if they have no common factor other than 1. Next, obtain the decryption key d such that (e × d) mod ((p - 1) × (q - 1)) = 1.

The public key is constructed from the pair (e, n) and the private key is given by the pair (d, n). The original prime numbers p and q are no longer needed, but must nevertheless be kept secret.

Given these two keys, encryption is defined by the following formula:

c = m^e mod n

and the formula for decryption is

m = c^d mod n

where m is the plain text message and c is the resulting cipher text. It may be noted that m must be less than n, which implies that a message block cannot exceed 1024 bits in length. A larger message is treated as the concatenation of multiple 1024-bit blocks.

Let us try out this scheme with a small example. Suppose we select p = 7 and q = 11, so that n = 77 and (p - 1) × (q - 1) = 60. We now need to pick a value for e that is relatively prime to 60; we select 7, observing that 60 and 7 have no common factors. Now we need to calculate d such that (7 × d) mod 60 = 1; d = 43 satisfies this, since 7 × 43 = 301 = 5 × 60 + 1.

So now we have the public key defined by (e, n) or (7, 77) and the private key defined by (d, n) or (43, 77). It may be observed that, with numbers this small, it is fairly easy to figure out p and q once you know n; it would then be possible to figure out d from e.

In practice, with p and q each 256 bits long, finding them from n is computationally infeasible, but it is obvious that p and q must still be kept secret. Let us now apply this example to a test case for encryption. Suppose the message consists of the number 9.

Then c = m^e mod n, or c = 9^7 mod 77 = 37, so the cipher text message is 37. On receipt of the message, the cipher text is decrypted as m = c^d mod n, which equals 37^43 mod 77 = 9. These figures can be verified on a calculator, although it has to be done carefully, particularly in the latter case: the exponentiation has to be done in stages, taking the remainder modulo 77 after each stage, to avoid dealing with integers that are too big for the calculator.
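The whole worked example can be checked in a few lines of Python (a toy, of course: real RSA uses primes hundreds of digits long).

```python
p, q = 7, 11
n = p * q                      # 77
phi = (p - 1) * (q - 1)        # 60
e = 7                          # chosen relatively prime to 60
d = pow(e, -1, phi)            # modular inverse of e, which is 43

m = 9                          # the plain text "message"
c = pow(m, e, n)               # encryption: 9^7 mod 77 = 37
print(c, pow(c, d, n))         # decryption recovers 9
```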

There is an interesting story worth telling regarding the breaking of RSA. In 1977, a challenge was issued to break a 129-digit (430-bit) message that had been encrypted using RSA.

It was believed that the code was impregnable, requiring 40 quadrillion years of computation using the currently known algorithms for factoring large numbers. However, in April 1994, a mere 17 years later, four scientists reported that they had broken the code.

The hidden message was:

"The Magic Words are Squeamish Ossifrage."

The task was accomplished using 5000 MIPS-years. This was done over an eight-month period by dividing the problem into smaller pieces and shipping these pieces, by e-mail, to computers all over the world.

Keep in mind that it does not always take 5000 MIPS-years to break a key, especially when the key is poorly chosen. For example, a security hole was exposed in a WWW browser that used RSA to encrypt credit card numbers being sent over the Internet.

The problem was that the system used a highly predictable method (a combination of the process ID plus the time of day) to generate a random number that was, in turn, used to generate a private and public key. Such keys are easily broken. RSA is just one of several public key algorithms, Diffie-Hellman being another.
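A small sketch of why such seeding is dangerous: if an attacker can guess the seed (here invented to be just the process ID combined with the time, mirroring the flaw described), he can regenerate exactly the same "random" key material.

```python
import os
import random
import time

def weak_key(seed):
    rng = random.Random(seed)              # deterministic and NOT cryptographically secure
    return rng.getrandbits(128)

seed = os.getpid() ^ int(time.time())      # highly predictable seed
victim_key = weak_key(seed)

# The attacker simply tries the handful of plausible seeds.
now = int(time.time())
recovered = any(weak_key(os.getpid() ^ t) == victim_key for t in range(now - 5, now + 1))
print(recovered)                           # True: the key space collapses to a few guesses
```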

Digital Signatures:

Digital signatures are a technique to authenticate a message or a document.

In legal documents, for example, a photocopy is often considered inadequate, while a document signed on paper by a person is accepted as genuine. Documents generated and sent by a computer therefore need some procedure whereby the handwritten signature is replaced by a "digital signature" on the document, the whole procedure being carried out on the computer.

Therefore, if a document is generated on the computer and sent to someone who receives it:

1. The receiver should be able to verify the claimed identity of the sender.

2. The sender should not later be able to repudiate the contents of the document that has been sent as a message bearing his “digital signature“.

3. The receiver of the message should not be able to concoct the “digital signature” on the message, so that he/she may take advantage of a false document.

Take the example of a message to a bank. Suppose the sender sends a message to the manager to transfer a crore of rupees to a particular party, or instructs the bank to buy a ton of gold.

The bank's computer should be able to confirm that the sender is who he says he is. Also, it should be safe for the bank to transfer the money to the designated party or buy the ton of gold, and the sender should not be able to deny later that he wanted the money transferred to a different party, or that the sum he had specified was one lakh and not one crore.

Similarly, in the case of the purchase of gold, the bank should not be able to claim that it was asked to buy 1 ton of silver and not gold. Digital signatures will be extremely useful in such cases.

Two such methods for digital signatures have been devised: one based on secret keys and the other on public keys (we are, of course, not considering message digests). In the USA, the de facto industry standard for public key signatures is the RSA algorithm.

However, NIST (National Institute of Standards and Technology) proposed using a variant of the El Gamal public key algorithm for its new Digital Signature Standard (DSS).

As usual, when the government tried to dictate cryptographic standards, there was an uproar. This was the position in the USA. The position in India, however, was a little different. In September 2006, the Information Technology Act was established.

The Information Technology Act provided the required legal sanctity to digital signatures based on asymmetric cryptosystems. Digital signatures were thereafter accepted on par with hand-written signatures, and digitally signed electronic documents were to be treated on par with paper documents.

This was central to the growth of e-commerce and e-governance, the development of which turns on the issue of trust in the electronic environment. The future of e-commerce and e-governance depends on the trust that the transacting parties place in the security of transmission and the content of communication.

It is for these reasons that the Act gave such signatures their legal standing.

The Act provided for the establishment of the Controller of Certifying Authorities (CCA) to license and regulate the working of Certifying Authorities (CAs), which issue digital signature certificates for the electronic authentication of users.

The CCA certifies the public keys of CAs using its own private key, which enables users in cyberspace to verify that a given certificate has been issued by a licensed CA. For this purpose, it operates the Root Certifying Authority of India (RCAI). The CCA also maintains the National Repository of Digital Certificates (NRDC), which contains all the certificates issued by all the CAs in the country.

The CAs that have been licensed in India are:

1. SafeScrypt

2. NIC

3. IDRBT

4. TCS

5. Customs & Central Excise

6. (n)Code Solutions CA

7. GNFC

These organisations have been given the authority in India to issue, record and verify digital signatures for organisations and individuals, subject to the guidelines that have been given to them. It now remains for us to see an example of how digital signatures are generated. For this, we take the case of digital signatures by the DSA (Digital Signature Algorithm).

DSA makes use of the following parameters:

1. p = a prime modulus, where 2^(L-1) < p < 2^L for 512 ≤ L ≤ 1024 and L a multiple of 64.

2. q = a prime divisor of p - 1, where 2^159 < q < 2^160.

3. g = h^((p-1)/q) mod p, where h is any integer with 1 < h < p - 1 (g then has order q mod p).

4. x = a randomly or pseudo-randomly generated integer with 0 < x < q.

5. y = g^x mod p.

6. k = a randomly or pseudo-randomly generated integer with 0 < k < q.

The integers p, q and g can be public and common to a group of users. A user's private and public keys are x and y respectively. They are normally fixed for a period of time.

Parameters x and k are used for signature generation only and must be kept secret. Parameter k must be regenerated for each signature. Parameters p, q, x and k should be generated using approved security measures. Let us now look at signature generation.

The signature of a message M is the pair of numbers r and s computed according to the equations given below:

r = (g^k mod p) mod q

s = (k^-1 (SHA(M) + x r)) mod q

In the above, k^-1 is the multiplicative inverse of k mod q, i.e. (k^-1 k) mod q = 1 and 0 < k^-1 < q. The value of SHA(M) is a 160-bit string output by the Secure Hash Algorithm. For use in computing s, this string must be converted to an integer.

As an option, one may wish to check whether r = 0 or s = 0. If either of them is 0, a new value of k must be generated and the signature recalculated (it is extremely unlikely that r = 0 or s = 0 if signatures are generated properly).

The signature is transmitted along with the message to the verifier.

Finally, let us look at the signature verification process.

Prior to verifying the signature in a signed message, p, q and g, plus the sender's public key and identity, are made available to the verifier in an authenticated manner.

Let M', r' and s' be the received versions of M, r and s respectively, and let y be the public key of the signatory. The verifier first checks that 0 < r' < q and 0 < s' < q; if either of the two conditions is violated, the signature is rejected. If both conditions are satisfied, the verifier computes

w = (s')^-1 mod q

u1 = (SHA(M') w) mod q

u2 = (r' w) mod q

v = ((g^u1 y^u2) mod p) mod q

If v = r’, then the signature is verified and the verifier can have high confidence that the received message was sent by the party holding the secret key x corresponding to y.

If v does not equal r’, then the message may have been modified, incorrectly signed by the signatory, or the message may have been signed by an impostor. The message should be considered invalid.
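The whole cycle can be traced with a toy example; the parameters below are tiny and insecure, and the "hash" is a stand-in invented for illustration, but the algebra is that of the equations above.

```python
# Toy DSA parameters: q divides p - 1, and g = h^((p-1)/q) mod p has order q.
p, q = 23, 11
g = pow(2, (p - 1) // q, p)        # = 4

x = 7                              # private key, 0 < x < q
y = pow(g, x, p)                   # public key

def toy_hash(message):             # stand-in for SHA, for illustration only
    return sum(message.encode()) % q

def sign(message, k):              # k must be fresh and secret for every signature
    r = pow(g, k, p) % q
    s = (pow(k, -1, q) * (toy_hash(message) + x * r)) % q
    return r, s

def verify(message, r, s):
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)
    u1 = (toy_hash(message) * w) % q
    u2 = (r * w) % q
    v = (pow(g, u1, p) * pow(y, u2, p) % p) % q
    return v == r

r, s = sign("pay one crore", 5)
print(verify("pay one crore", r, s))   # True: signature accepted
print(verify("pay two crore", r, s))   # False: any change invalidates the signature
```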

The example given above is to indicate how Digital Signatures can be developed and allotted. The example by no means specifies how it is being done by the parties in India mentioned above.

Management of Public Keys:

There are three basic elements in any encryption system. These are:

1. A means of changing information into code, that is, the algorithm,

2. A secret-starting point for the algorithm, that is, the key, and

3. A system to control the key, that is, key management.

The key determines how the encryption process will be applied to a particular message, and matching keys must be used to encrypt and decrypt messages. The algorithm used in an encryption system normally remains the same for the life of the equipment, so it is necessary to change keys frequently in order that identical encryption is not applied to messages for a long period.

It is generally desirable to change the keys on an irregular, but managed basis. Key management deals with the generation, storage, distribution, selection, destruction and archiving of the key variables. Two basic types of encryption in use today are known as secret key encryption and public key encryption.

In secret key encryption, the same key is used for both encryption and decryption. The key must be kept secret so that unauthorized parties cannot, even with the knowledge of the algorithm, complete the decryption process. A person trying to share encrypted information with another person has to solve the problem of communicating the encryption key without compromising it.

This is normally achieved by programming keys into all encryptors prior to deployment, and the keys should be stored securely within the devices. With a relatively small number of encryptors, the task of key management (including key changes) is easily handled for a private key system. Secret key encryption is a commonly used method of key management and is used in the standard algorithms.

Public key encryption solves the problem of maintaining key security by having separate keys for encryption and decryption, which uniquely match each other, but are not predictable from each other. The user retains a private decryption key and makes the public key available for use by anyone interested in sending the user sensitive information.

The relationship between the two keys is such that given the public key a person cannot easily derive the private key. Senders use the recipient’s public key to send encrypted messages. Recipients use their corresponding private keys to decrypt the messages.

The private key can also be used to encrypt messages which can be decrypted by anyone with knowledge of the public key (the purpose of this is to provide verification of origin rather than to achieve secrecy). Public key encryption is relatively inefficient and is not suitable either for encrypting large volumes of data or for operating at high speeds. The RSA algorithm is a well-known form of public key encryption.

Web Security:

Regarding web security, it is merely necessary to point out here that the World Wide Web is a very big area and that it permits a great deal of freedom to its users. We will attempt to point out the pitfalls that can exist in using it. There are two different types of parties using the web, each using it differently.

The first is the developer who is writing into the web and then there is the user who reads from the web. As far as the developer is concerned, he wishes to make sure that no one other than him (or her) can alter what has already been written, particularly with malicious intention.

While the developer does have a specific password, there may be other ways to get control of the website as was done, for example, in the case of the Indian Army’s website. For this, those who control and administer the allocation of websites will have to tighten up their security.

Unfortunately, freedom of the user also cannot be compromised, leading to a difficult situation. These issues are being discussed and hopefully one day we will come up with a suitable solution against hacking of websites. Till then, the user himself/herself must remain careful.

Aspect # 5. Virtual Private Networks (VPNs):

Virtual Private Networks, or VPNs, are the current buzzword amongst network owners and administrators. With the use of the Internet increasing rapidly, many users and organisations tend to use their network to connect themselves with the Internet.

Since their internal network may have sensitive files that are meant for internal use only, there are increasing concerns about the security of such files the moment the internal network becomes part of the Internet, even if only for a short period. To try to overcome this problem, VPNs have been introduced.

This is done by creating a secure pipeline between the network and the appropriate Internet server. Without going into the technological issues involved, one may simply consider that, depending upon the value of the secret and sensitive data, a VPN may succeed in keeping the network files secure.

If the value of the secret data to an interloper is less than the cost of the effort of accessing the file and extracting the data, the chances are very small that the data in the secret file will be accessed and read. Until VPNs are proven by long-term use, it is best not to keep such data on an internal network that is at the same time connected to the Internet.

This issue is further discussed in the following paragraph:

Finally, let us come back to the problems associated with passwords, encryption and excessive security measures. The desire for secrecy is often so strong that the basic issue is sometimes forgotten. This can lead to a situation whereby the cost of keeping a file secret is much more than the value of the data. Secondly, it is sometimes forgotten that it is impossible to make a lock whose key cannot be duplicated.

Therefore, for cases where the user feels that the value of the information is very high, it is best that the data should not be put on the network at all. That is the only way to keep highly secret data safe.

While discussing firewalls, I mentioned that they are used so that “no unwanted visitor could enter the fortress easily”. The accent on the word “easily” is as justified for protecting data today as it was for protecting a fortress in medieval days.
