Category: Technology

All You Need to Know About IPv4

According to abbreviationfinder, IPv4 stands for Internet Protocol version 4, the fourth version of the Internet Protocol (IP) and the first to be deployed on a large scale. It is defined in RFC 791.


The TCP/IP protocol is the protocol used to manage data traffic on the network. This protocol is actually made up of two different protocols that perform different actions.

On the one hand, there is the TCP protocol, which is in charge of data transfer control, and on the other, there is the IP protocol, which is in charge of identifying the machine on the network.

Use of IPv4

IPv4 uses 32-bit addresses, limiting it to 2^32 = 4,294,967,296 unique addresses, many of which are reserved for purposes such as local area networks (LANs). Because the Internet grew far more than anticipated when IPv4 was designed, and because addresses are wasted in many cases, IPv4 addresses have been scarce for several years now.

This limitation helped spur the push towards IPv6, which is currently in the early stages of deployment and is expected to eventually replace IPv4.

The addresses available in the IANA global pool belonging to the IPv4 protocol were officially exhausted on Thursday, February 3, 2011. From that point on, the Regional Internet Registries had to manage their own pools, which were estimated to last until September 2011.

Currently there are no IPv4 addresses available for purchase, so migrating to IPv6 has become a pressing obligation. Operating systems such as Windows Vista and 7, Unix-like systems (GNU/Linux, Unix, Mac OS X) and BSD, among others, have native IPv6 support, while Windows XP requires opening a command prompt and typing ipv6 install to enable it; older systems do not support it at all.

Waste of addresses

The waste of IPv4 addresses is due to several factors.

One of the main ones is that the enormous growth of the Internet was not anticipated at the outset; large address blocks (of about 16.8 million addresses each) were assigned to countries and even to individual companies.

Another reason for waste is that in most networks, except the smallest, it is convenient to divide the network into subnets. Within each subnet, the first and last addresses are not usable, and not all of the remaining addresses are always used either. For example, to accommodate 80 hosts in a subnet you need a 128-address subnet (rounding up to the next power of 2); in this example, the remaining 48 addresses go unused.
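The subnet arithmetic above can be sketched in a few lines of Python with the standard `ipaddress` module; the prefix helper below is illustrative, not part of any library:

```python
import ipaddress
import math

def smallest_subnet_prefix(hosts, total_bits=32):
    """Longest prefix whose subnet holds `hosts` usable addresses
    (the first and last address of each subnet are reserved)."""
    needed = hosts + 2  # network and broadcast addresses are not usable
    size = 2 ** math.ceil(math.log2(needed))  # round up to a power of two
    return total_bits - int(math.log2(size))

# 80 hosts need a /25: a 128-address subnet, of which 126 are usable.
prefix = smallest_subnet_prefix(80)
net = ipaddress.ip_network(f"192.0.2.0/{prefix}")
print(prefix, net.num_addresses)  # 25 128
```

The 48-address surplus in the example is simply 128 − 80, of which two are the reserved network and broadcast addresses.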

Transition to IPv6

Everyone is involved in the transition: end users, developers of software and operating systems, makers of network and communications hardware and, in general, all kinds of organizations.

The Internet is a network with no “executive management” as such; there is no single authority in command. Rather, all of us together are the Network.

The IETF (Internet Engineering Task Force), the organization that standardizes Internet protocols, began work on the new IPv6 protocol about 15 years ago, and since approximately 2002 the base of the protocol, relative to IPv4, can be considered fully finished and tested. ISOC (the Internet Society), on which the IETF administratively depends, has also supported the development and deployment of IPv6.


All You Need to Know About CD (Compact Disc)

The compact disc (popularly known as CD, or cedé, from the English acronym "compact disc" according to abbreviationfinder) is an optical digital medium used to store any type of information (audio, images, video, documents and other data).

Today, it remains the preferred physical medium for audio distribution.

CD in Spanish language

In Spanish, «cedé» (written as it is pronounced) is now accepted, having been lexicalized through use. In much of Latin America it is pronounced [sidí], as in English, but the Pan-Hispanic Dictionary of Doubts (of the Association of Academies of the Spanish Language) advises against that pronunciation. The anti-etymological «cederrón» (from the acronym CD-ROM: compact disc read-only memory) is also accepted.


The compact disc was created in 1979 by the Japanese engineer Toshitada Doi (of Sony) and the Dutch engineer Kees Immink (of Philips). The following year, Sony and Philips, which had jointly developed the Compact Disc digital audio system, began distributing compact discs, but sales were poor due to the economic slump of the time, so they decided to target the higher-quality classical music market. Thus began the launch of the new and revolutionary audio recording format, which would later extend to other data-recording sectors.

The optical system was developed by Philips, while digital reading and encoding were carried out by Sony. The system was presented to the industry in June 1980, and 40 companies from all over the world adopted the new product by obtaining licenses to produce players and discs. In 1981, the conductor Herbert von Karajan, convinced of the value of compact discs, promoted them during the Salzburg Festival (Austria), and from that moment their success began. The first titles recorded on compact disc in Europe were the Alpine Symphony (by Richard Strauss), the waltzes of Frédéric Chopin (performed by Chilean pianist Claudio Arrau) and the album The Visitors (by Swedish pop group ABBA).

In 1983, the American company CBS (which today belongs to the Japanese Sony Music) released the first compact disc in the United States: an album by pop singer Billy Joel.

The production of compact discs was centralized for several years in the United States and Germany, from where they were distributed throughout the world. In the nineties, factories were set up in various countries; for example, in 1992 Sonopress produced the first CD in Mexico, entitled De mil colores (by Daniela Romo). In 1984, compact discs opened up to the world of computing, allowing storage of up to 700 MB.

The diameter of the central hole of a compact disc was set at 15 mm; according to one account, the creators were inspired by the diameter of the Dutch 10-cent guilder coin. The 12 cm diameter of the disc itself, in turn, corresponds to the width of the top pocket of a men's shirt because, according to Sony's philosophy, everything had to fit there.

Recordable media

The three media that rely on optical reading are CD-ROM, CD-R and CD-RW. A CD-ROM is pressed at manufacture and can only be read; a CD-R can be recorded only once; a CD-RW allows repeated reading and recording.

Physical details

Although there may be variations in the composition of the materials used to make the discs, they all follow the same pattern: compact discs are made from a 1.2-millimetre-thick disc of polycarbonate plastic, to which a reflective aluminium layer is added to improve the longevity of the data; this layer reflects the laser light (in the infrared range, and therefore not visible). A protective layer of lacquer is then added to shield the aluminium and, optionally, a label on the upper side.

Common printing methods for CDs are screen printing and offset printing. In CD-R and CD-RW discs, gold, silver and their alloys are used instead, because their ductility allows low-power lasers to record on them, something that could not be done on aluminium.


CD-ROMs consist of a spiral track with the same number of bits per centimetre in every section (constant linear density), to make the best use of the storage medium rather than wasting space as magnetic disks do. For this reason, when reading or writing a CD, the rotational speed must decrease as the laser beam moves away from the centre, since near the centre each turn of the spiral is shorter than at the edges. By varying the speed, the number of bits read per second remains constant in every section, whether at the centre or at the edges. If the rotational speed were constant, fewer bits per second would be read near the centre and more near the edges. All of this means that a CD rotates at a variable angular speed.

To achieve the same density in every section of the spiral, during recording the laser beam emitted by the head (which moves in a straight radial line from the centre to the edge of the disc) traces the spiral at constant linear velocity (CLV), meaning that the number of bits recorded per second is constant. To maintain this constant linear density along the spiral track, the CD must rotate at a variable angular speed, as explained above. Thus, by spinning at a variable angular speed while being written at a constant linear speed, the same number of bits is written and read per second and per centimetre at any position on the CD, while each turn of the spiral contains more or fewer bits depending on whether it lies nearer the edge or the centre.
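As a rough sketch of this relationship, the required rotational speed follows directly from the constant linear velocity and the current radius. The figures below are assumptions for illustration: a typical 1x read speed of about 1.3 m/s and a program area running from roughly 25 mm to 58 mm from the centre.

```python
import math

def rpm_at_radius(clv_m_per_s, radius_m):
    """Angular speed (rev/min) needed to keep the linear speed constant."""
    return clv_m_per_s * 60 / (2 * math.pi * radius_m)

CLV = 1.3          # typical 1x audio CD linear speed in m/s (assumed)
INNER_R = 0.025    # program area starts ~25 mm from the centre
OUTER_R = 0.058    # and ends ~58 mm out

print(round(rpm_at_radius(CLV, INNER_R)))  # ~497 rpm near the centre
print(round(rpm_at_radius(CLV, OUTER_R)))  # ~214 rpm near the edge
```

The drive therefore slows by more than half as the head travels outward, which is why CD motors are servo-controlled rather than fixed-speed.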

Standard CDs are available in different sizes and capacities, so we have the following variety of discs:

  • 120 mm (diameter) with 74–80 minutes of audio and 650–700 MB of data capacity.
  • 120 mm (diameter) with 90–100 minutes of audio and 800–875 MB of data (no longer on the market).
  • 80 mm (diameter), initially designed for CD singles. These can store about 21 minutes of music or 210 MB of data and are also known as “mini-CDs” or “pocket CDs.”

A standard CD-ROM can hold 650 or 700 (sometimes 800) MB of data. The CD-ROM is popular for distributing software, especially multimedia applications, and large databases. A CD weighs less than 30 grams. To put CD-ROM capacity in context: an average novel contains 60,000 words. Assuming an average word has 10 letters (in fact it is considerably fewer) and each letter occupies one byte, a novel would occupy 600,000 bytes (600 kB).

A CD can therefore hold more than 1,000 novels. If each novel occupies at least one centimetre of shelf, a CD holds the equivalent of more than 10 metres of shelving. Moreover, textual data can be compressed by roughly a factor of ten using compression algorithms, so a CD-ROM can store the equivalent of more than 100 metres of shelf space.
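The arithmetic behind this comparison is easy to check directly (assuming decimal megabytes, as marketing capacities usually are):

```python
# Back-of-the-envelope check of the shelf-space comparison above.
WORDS_PER_NOVEL = 60_000
BYTES_PER_WORD = 10          # generous: average word is shorter
CD_CAPACITY = 700 * 1000**2  # 700 MB in decimal megabytes (assumed)

novel_bytes = WORDS_PER_NOVEL * BYTES_PER_WORD   # 600,000 bytes = 600 kB
novels_per_cd = CD_CAPACITY // novel_bytes
print(novel_bytes, novels_per_cd)  # 600000 1166
```

Well over a thousand uncompressed novels, as the text says, and roughly ten times that with text compression.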


Once the problem of storing the data was solved, it remained to interpret it correctly. To that end, the companies that created the compact disc defined a series of standards, each reflecting a different level. Each document was bound in a different colour, giving rise to the name “Rainbow Books.”


All You Need to Know About RSA

RSA is the best-known and most widely used asymmetric cryptographic system. The public-key algorithm was created in 1978 by Ron Rivest, Adi Shamir and Len Adleman of the Massachusetts Institute of Technology (MIT, according to abbreviationfinder); the letters RSA are the initials of their surnames. Their work built on the Diffie-Hellman paper on public-key systems.



These keys are computed secretly on the machine where the private key will be stored, and once generated the private key should be protected using a symmetric cryptographic algorithm.


Regarding key lengths, RSA allows variable lengths; at the time of writing, keys of no fewer than 1024 bits were advisable (keys of up to 512 bits have been broken, although it took more than 5 months and almost 300 computers working together to do so).


RSA bases its security on being a computationally secure one-way function: although modular exponentiation is easy, its inverse operation is not feasible unless the factorization of the modulus n is known, from which the system's private key can be derived. RSA is the best known and most widely used of the public-key systems, and also one of the fastest of them.


It has all the advantages of asymmetric systems, including the digital signature, although symmetric systems are more practical for confidentiality because they are faster. RSA is therefore often used in hybrid systems to encrypt and transmit the symmetric key that will then be used for the encrypted communication.

RSA algorithm

The algorithm consists of three steps: key generation, encryption, and decryption.
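A toy sketch of those three steps in Python, using deliberately tiny primes; such a key is trivially breakable and is for illustration only:

```python
from math import gcd

# 1. Key generation with toy primes -- never secure.
p, q = 61, 53
n = p * q                  # modulus: 3233
phi = (p - 1) * (q - 1)    # Euler's totient: 3120
e = 17                     # public exponent, coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)        # private exponent: modular inverse of e

# 2. Encryption: c = m^e mod n
def encrypt(m):
    return pow(m, e, n)

# 3. Decryption: m = c^d mod n
def decrypt(c):
    return pow(c, d, n)

m = 65
c = encrypt(m)
print(c, decrypt(c))       # 2790 65
```

The public key is the pair (n, e) and the private key is d; security rests on the difficulty of recovering p and q from n once the primes are large.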

Padding schemes

RSA must be combined with some version of a padding scheme; otherwise the value of M can lead to insecure ciphertexts. RSA used without a padding scheme can suffer from several problems:

  • The values m = 0 and m = 1 always produce ciphertexts of 0 and 1 respectively, due to the properties of exponents.
  • When encrypting with a small exponent (e.g. e = 3) and small values of m, m^e can be strictly less than the modulus n. In that case the ciphertext can be decrypted easily by taking the e-th root, ignoring the modulus entirely.
  • Because RSA encryption is a deterministic algorithm (it has no random component), an attacker can successfully mount a chosen-plaintext attack against the cryptosystem by building a dictionary of likely plaintexts encrypted with the public key and storing the results. By observing ciphertexts on a communication channel, the attacker can use this dictionary to recover the content of messages.

In practice, the first two problems can arise when sending small ASCII messages, where m is the concatenation of one or more ASCII-encoded characters. A message consisting of a single ASCII NUL character (value 0) would be encoded as m = 0, producing a ciphertext of 0 regardless of the values of e and N; likewise a single ASCII SOH (value 1) would always produce a ciphertext of 1. For systems using small values of e, such as 3, a single-character ASCII message encoded this way would be insecure, since the maximum value of m would be 255, and 255³ is smaller than any reasonable modulus; the plaintext could then be recovered simply by taking the cube root of the ciphertext. To avoid these problems, practical RSA implementations embed some structure, in the form of randomized padding, into the value of m before encryption. This technique ensures that m does not fall into the range of insecure plaintexts and that a given message, once padded, encrypts to one of a large number of possible ciphertexts. That last property also makes the dictionary attack above intractable.
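The small-exponent problem can be demonstrated with a toy sketch; the modulus below is an arbitrary stand-in for a large public key, and `icbrt` is a hypothetical helper written for this example:

```python
# Why unpadded RSA with e = 3 is unsafe for small messages:
# if m**3 < n, the ciphertext is simply m**3, and an integer
# cube root recovers m without the private key.

n = 3233 * 10**50 + 7      # stand-in for a large public modulus (hypothetical)
e = 3
m = 255                    # a single ASCII character
c = pow(m, e, n)           # 255**3 = 16581375, far below n

def icbrt(x):
    """Exact integer cube root by binary search (avoids float error)."""
    lo, hi = 0, 1 << ((x.bit_length() + 2) // 3 + 1)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid**3 < x:
            lo = mid + 1
        else:
            hi = mid
    return lo

print(icbrt(c))  # 255 -- plaintext recovered from the ciphertext alone
```

Randomized padding defeats this by guaranteeing that the padded m is always comparable in size to the modulus.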

The RSA padding scheme must be carefully designed to prevent sophisticated attacks that could be facilitated by a predictable message structure. Examples of padding schemes used with RSA:

  • RSA-OAEP (Optimal Asymmetric Encryption Padding), or its modified version RSA-OAEP+. This type of padding is used, for example, in PKCS #1 and in the Tor anonymity network.
  • RSA-SAEP+ (Simplified Asymmetric Encryption Padding).
  • RSA-PSS (Probabilistic Signature Scheme), used for example in PKCS #1.

Message authentication

RSA can also be used to authenticate a message. Suppose Alice wants to send an authenticated message to Bob. She produces a hash value of the message, raises it to the power d modulo n (just as she does when decrypting), and attaches the result to the message as a “signature.” When Bob receives the authenticated message, he applies the same hash algorithm and, using Alice's public key, raises the received signature to the power e modulo n (just as he does when encrypting), then compares the result with the hash of the message. If the two match, he knows that the author of the message possessed Alice's secret key and that the message has not been tampered with.
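A minimal sketch of this sign-and-verify flow, using assumed toy key values (n = 3233, e = 17, d = 2753, never secure). The hash is reduced modulo n only because the toy modulus is so small; real systems use full-size keys and a padding scheme such as RSA-PSS:

```python
import hashlib

n, e, d = 3233, 17, 2753   # toy public/private key pair (illustration only)

def sign(message: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)    # apply the private exponent to the hash

def verify(message: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h  # apply the public exponent

sig = sign(b"hello")
print(verify(b"hello", sig))  # True
```

Verification succeeds because raising to d and then to e modulo n returns the original hash, exactly as in the encryption direction.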

It should be noted that the security of padding schemes such as RSA-PSS is essential both for signatures and for message encryption, and that the same key should never be used for both encryption and authentication.


All You Need to Know About LCD

An LCD (liquid crystal display, according to abbreviationfinder) is a thin, flat screen made up of a number of color or monochrome pixels placed in front of a light source or reflector. It is often used in battery-powered electronic devices because it consumes very small amounts of electrical energy.

It is present in countless devices, from the limited display of a pocket calculator to televisions of 50 inches or more. These screens are composed of thousands of tiny liquid crystals, which in reality are neither solid nor liquid but an intermediate state of matter.


Each pixel of an LCD typically consists of a layer of molecules aligned between two transparent electrodes and two polarizing filters, whose axes are (in most cases) perpendicular to each other. With no liquid crystal between the polarizing filters, light passing through the first filter would be blocked by the second (crossed) polarizer.

The surfaces of the electrodes in contact with the liquid crystal material are treated to align the liquid crystal molecules in a particular direction. The treatment usually consists of a thin polymer layer that is rubbed in one direction with, for example, a cloth; the direction of the rubbing defines the direction of the liquid crystal alignment.

Before an electric field is applied, the orientation of the liquid crystal molecules is determined by their adaptation to the surfaces. In a twisted nematic (TN) device, one of the most common liquid crystal devices, the surface alignment directions of the two electrodes are perpendicular to each other, so the molecules arrange themselves in a helical, or twisted, structure. Because the liquid crystal material is birefringent, light passing through the first polarizing filter is rotated by the liquid crystal helix as it crosses the layer, allowing it to pass through the second polarizing filter. Half of the incident light is absorbed by the first polarizer, but otherwise the whole assembly is transparent.

When a voltage is applied across the electrodes, a torque orients the liquid crystal molecules parallel to the electric field, distorting the helical structure (this is resisted by elastic forces, since the molecules are anchored at the surfaces). This reduces the rotation of the polarization of the incident light, and the device appears gray. If the applied voltage is large enough, the molecules in the center of the layer untwist almost completely and the polarization of the incident light is no longer rotated as it crosses the layer. That light is then polarized mainly perpendicular to the second filter, so it is blocked and the pixel appears black. By controlling the voltage applied across the liquid crystal layer at each pixel, light can be allowed through in varying amounts, producing different shades of gray.

The optical effect of a twisted nematic (TN) device in the energized state is far less dependent on variations in device thickness than in the unenergized state. Because of this, these devices are usually operated between crossed polarizers so that they appear bright with no voltage applied (the eye is much more sensitive to variations in the dark state than in the bright state). They can also operate between parallel polarizers, in which case the bright and dark states are reversed; in that configuration the dark state appears reddish because of small thickness variations across the device.

Both the liquid crystal material and the alignment layer material contain ionic compounds. If an electric field of one polarity is applied for a long period, this ionic material is attracted to the surfaces and degrades the device's performance; to avoid this, the cell is driven with an alternating voltage of periodically reversed polarity.

When a device requires a large number of pixels, it is not feasible to drive each one directly, since each pixel would then need its own electrodes. Instead, the display is multiplexed: the electrodes on one side of the display are grouped with common wires (usually in columns), each group getting its own voltage source, while the electrodes on the other side are also grouped (usually in rows), each group getting its own voltage sink. The groups are designed so that each pixel corresponds to a unique, dedicated combination of source and sink. The electronic circuits, or the software that drives them, activate the sinks in sequence and drive the sources of the pixels of each sink.


Important factors to consider when evaluating an LCD screen:


Resolution

The horizontal and vertical dimensions, expressed in pixels. HD displays have a native resolution of 1366 x 768 pixels (720p), while Full HD has a native resolution of 1920 x 1080 pixels (1080p).

Dot pitch

The distance between the centers of two adjacent pixels. The smaller the dot pitch, the less granular the image. The dot pitch may be the same vertically and horizontally, or different (less common).


Size

The size of an LCD panel is measured along its diagonal, usually expressed in inches (colloquially, the active display area).

Response time

The time it takes for a pixel to change from one color to another.

Matrix type

Active, passive and reactive.

Vision angle

The maximum angle from which a user can look at the LCD, offset from its center, without losing image quality. Newer screens offer viewing angles of up to 178 degrees.

Color support

The number of supported colors, colloquially known as the color gamut.


Brightness

The amount of light emitted from the screen; also known as luminosity.


Contrast ratio

The ratio of the brightest intensity to the darkest.


Aspect ratio

The ratio of width to height (for example, 5:4, 4:3, 16:9 and 16:10).

Input ports

For example, DVI, VGA, LVDS, or even S-Video and HDMI.

Brief history

  • 1887: Friedrich Reinitzer (1858–1927) discovers that cholesterol extracted from carrots is a liquid crystal (that is, he discovers the existence of two melting points and the generation of colors), and publishes his findings at a meeting of the Vienna Chemical Society on May 3, 1888 (F. Reinitzer: Zur Kenntniss des Cholesterins, Monatshefte für Chemie (Wien) 9, 421–441 (1888)).
  • 1904: Otto Lehmann publishes his work Liquid Crystals.
  • 1911: Charles Mauguin describes the structure and properties of liquid crystals.
  • 1936: The Marconi Wireless Telegraph company patents the first practical application of the technology, the “liquid crystal light valve”.
  • 1960 to 1970: Pioneering work on liquid crystals was carried out in the 1960s by the Royal Radar Establishment in Malvern, UK. The RRE team supported ongoing work by George Gray and his team at the University of Hull, who eventually discovered cyanobiphenyl liquid crystals, which had the right stability and temperature properties for use in LCD displays.
  • 1962: The first major publication in English on the topic Molecular structure and properties of liquid crystals, by Dr. George W. Gray.
    Richard Williams of RCA found that some liquid crystals had interesting electro-optical characteristics, and he demonstrated an electro-optical effect by generating band patterns in a thin layer of liquid crystal material with an applied voltage. The effect is based on a hydrodynamic instability forming what are now called “Williams domains” inside the liquid crystal.
  • 1964: In the fall of 1964, George H. Heilmeier, working at the RCA laboratories on the effect discovered by Williams, noticed the color switching induced by the realignment of dichroic dyes in a homeotropically oriented liquid crystal. Practical problems with this new electro-optical effect led Heilmeier to continue working on scattering effects in liquid crystals and, finally, to the first functioning liquid crystal display, based on what he called the dynamic scattering mode (DSM). Applying a voltage to a DSM device switches the initially transparent liquid crystal layer into a milky, cloudy state. DSM devices could operate in transmission or reflection mode, but required considerable current to operate.
  • 1970: On December 4, 1970, the patent on the twisted nematic (TN) field effect in liquid crystals was filed by Hoffmann-LaRoche in Switzerland (Swiss Patent No. 532,261), listing Wolfgang Helfrich and Martin Schadt (then working at its Central Research Laboratories) as inventors. Hoffmann-LaRoche licensed the invention to the Swiss manufacturer Brown, Boveri & Cie, which produced watch displays during the 1970s, and to the Japanese electronics industry, which soon produced the first digital quartz wristwatches with TN LCD screens, along with many other products. James Fergason of Kent State University filed an identical patent in the US on April 22, 1971. In 1971 Fergason's company ILIXCO (now LXD Incorporated) produced the first LCD displays based on the TN effect, which soon superseded the poor-quality DSM types thanks to lower operating voltages and lower power consumption.
  • 1972: The first liquid crystal active matrix display was produced in the United States by Peter T. Brody.

A detailed description of the origins and complex history of liquid crystal displays, from an insider's perspective from the earliest days, has been published by Joseph A. Castellano in Liquid Gold: The Story of Liquid Crystal Displays and the Creation of an Industry.

The same story, seen from a different perspective, has been described by Hiroshi Kawamoto (“The History of Liquid-Crystal Displays”, Proc. IEEE, Vol. 90, No. 4, April 2002); the document is publicly available at the IEEE History Center.

Color in devices

In color LCD screens, each pixel is divided into three cells, or sub-pixels, colored red, green and blue by additional filters (pigment filters, dye filters or metal oxide filters). Each sub-pixel can be controlled independently to produce thousands or millions of possible colors for each pixel. CRT monitors use the same “sub-pixel” structure through their phosphors, although the analog electron beam used in CRTs does not address an exact number of sub-pixels.

Color components can be arranged in various pixel geometries, depending on the monitor's intended use. If the software knows which geometry a particular LCD uses, this can be exploited to increase the apparent resolution of the monitor through sub-pixel rendering, a technique especially useful for anti-aliasing text.


All You Need to Know About Hypertext Markup Language

According to abbreviationfinder, Hypertext Markup Language is also known as HTML. It uses marks to describe how text and graphics should appear in a web browser, which in turn is prepared to read those marks and display the information in a standard format. Some web browsers include additional marks that can only be read and used by them, and not by other browsers. Using non-standard marks in places of special importance is not recommended.


HTML began as a simplification of SGML (Standard Generalized Markup Language), which is much more difficult to learn than HTML and is generally used to publish large amounts of data in different ways. In theory, each mark is not just a formatting code for the text but carries its own meaning, so everything in the document should sit inside a mark with a meaning. To “read” an HTML page without a web browser, you need to be familiar with a few terms and concepts. The first is the source, or source code: the name for all the marks and text that make up the HTML file. The source is what you actually see when you open the HTML file in a text editor rather than a web browser.

A mark is the basic element of the code that formats the page and tells the browser how to display certain elements. Marks do not appear when web pages are displayed, but they are an essential part of HTML authoring. The < and > symbols are essential, as they indicate to the browser that what follows is an instruction, not text that should appear on screen.

Many marks require what is called an end mark. Generally it is the same mark, but with a slash before its name. For example, the mark for bold text is <B>; it must be placed before the text it affects, with the closing mark </B> after it. If a mark is not closed, parts of the document may not display correctly or, worse, the browser may fail on the incorrect syntax.

An attribute appears directly inside a mark, between the < and > symbols. It modifies certain aspects of the mark and instructs the browser to display the information with additional special characteristics. For instance, while the mark for an image is <IMG>, it has a required attribute, SRC, that tells the browser where to find the graphic file, along with several optional attributes such as HEIGHT, WIDTH and ALIGN.

Most attributes are optional and let you override browser defaults to customize the appearance of certain elements.
Whenever a mark appears with a slash, such as </B> or </HTML>, it is “closing,” or ending, the mark for that section. Not all marks have a closing mark (the image mark, for example), and for some of them the closing mark is optional (such as <P>).

An attribute usually has a value, expressed with the equals sign (=). Attribute values are always put in quotes, except numbers, which do not need them (although quoting them too is a good habit).
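Python's standard `html.parser` module offers a quick way to see marks and their attributes the way a browser's parser does; `MarkLister` below is a hypothetical helper written for this example:

```python
from html.parser import HTMLParser

# Collect every start tag together with its (attribute, value) pairs.
class MarkLister(HTMLParser):
    def __init__(self):
        super().__init__()
        self.seen = []
    def handle_starttag(self, tag, attrs):
        self.seen.append((tag, attrs))

p = MarkLister()
p.feed('<b>bold</b> <img src="logo.gif" width="40">')
print(p.seen)
# [('b', []), ('img', [('src', 'logo.gif'), ('width', '40')])]
```

Note that the parser lowercases tag and attribute names, and that the quoted attribute values arrive as plain strings, matching the mark/attribute/value model described above.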

Browsers, compatibility

As we have said, the browser installed on the user's computer interprets the HTML code of the pages they visit, so two users may sometimes see the same page differently because they have different browsers installed, or even different versions of the same browser.

Today's browsers claim to be compatible with the latest version of HTML, and they must implement extensions in order to become compatible with each new version.

Two of the browsers that are continually making extensions are Internet Explorer and Netscape Navigator, which make extensions even before the standards are set, trying to include the new features included in the drafts.

Browsers have to be compatible with the latest HTML version in order to interpret as many tags as possible. If a browser does not recognize a tag, it ignores it and the effect that the tag intended is not reflected on the page.

To make the extensions of these browsers, new attributes are added to existing tags, or new tags are added.

As a result of these extensions, some pages have code that can be fully interpreted by all browsers, while others, by including new attributes or tags from the draft of the latest HTML version, can only be fully interpreted by the most up-to-date browsers.
In the latter case, it can also happen that one tag on a page can only be interpreted by one specific browser and another tag only by a different one, so the page would never be viewed in its entirety by any browser.

One of the challenges of web page designers is to make the pages more attractive using all the power of the HTML language but taking into account these compatibility problems so that the greatest number of Internet users see their pages as they have been designed.


An editor is a program that allows us to write documents. Today there are a large number of editors that allow you to create web pages without writing a single line of HTML code. These editors have a visual environment and automatically generate the code for the pages. Being able to see at all times how the page will look in the browser makes creating pages easier, and the use of menus speeds up the work.

These visual editors can sometimes generate junk code, that is, code that serves no purpose. On other occasions it may be more effective to correct the code directly, so it is necessary to know HTML in order to debug the code of your pages.
Some of the visual editors with which you can create your web pages are Macromedia Dreamweaver, Microsoft FrontPage, Adobe PageMill, NetObjects Fusion, CutePage, HotDog Professional, Netscape Composer, and Arachnophilia, some of which have the advantage of being free.

In aulaClic you can find courses on Macromedia Dreamweaver and Microsoft FrontPage, two of the most widely used editors today.
It is advisable to start with a tool that is as simple as possible, so that we have to write the HTML code ourselves. This helps you become familiar with the language before moving on to a visual editor, and lets you debug the code when necessary.

To create web pages by writing the HTML code directly, you can use any plain-text editor, such as WordPad or Notepad on Windows, or the powerful Vim or Emacs editors, mainly in Unix and GNU/Linux environments.


Tags or marks delimit each of the elements that make up an HTML document. There are two types of tags: those that begin an element and those that end or close it.

The start tag is delimited by the characters < and >. It is made up of the identifier or name of the tag, and it can contain a series of optional attributes that add certain properties. Its syntax is: <identifier attribute1 attribute2…>
The attributes of the start tag follow a predefined syntax and can take arbitrary user-supplied values or predefined HTML values.

The end tag is delimited by the characters </ and >. It is composed of the identifier or name of the tag, and it does not contain attributes. Its syntax is: </identifier>

Each element on the page is found between a start tag and its corresponding end tag, with the exception of some elements that do not need an end tag. It is also possible to nest tags, that is, to insert tags between another pair of beginning and ending tags.
It is important to nest tags properly: tags cannot be "crossed". For example, if we open a <p…> tag and then open a <font…> tag before closing it, we must close the </font> tag before closing the </p> tag.
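This "last opened, first closed" rule can be sketched with a simple stack in Python (a deliberately simplified checker: the regular expression ignores attributes, and the sketch does not account for void elements such as <img> or optional end tags):

```python
import re

def check_nesting(fragment):
    """Return True if tags close in last-opened, first-closed order."""
    stack = []
    for closing, name in re.findall(r'<(/?)(\w+)[^>]*>', fragment):
        if not closing:
            stack.append(name)           # start tag: push onto the stack
        elif stack and stack[-1] == name:
            stack.pop()                  # end tag matches the innermost open tag
        else:
            return False                 # crossed or unmatched closing tag
    return not stack                     # everything opened was closed

print(check_nesting('<p><font>text</font></p>'))   # True  (properly nested)
print(check_nesting('<p><font>text</p></font>'))   # False (crossed tags)
```

Real HTML parsers are far more forgiving, but the stack discipline is exactly what makes "crossed" tags ill-formed.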


All You Need to Know About CISC


According to abbreviationfinder, CISC stands for Complex Instruction Set Computer. In such a processor there are hundreds of registers, and many steps and clock cycles are needed to perform a single operation.

All PC-compatible x86 CPUs are CISC processors, while newer Macs and some machines of more complex design probably have a RISC (Reduced Instruction Set Computer) CPU.


The practical difference between CISC and RISC is that CISC x86 processors run DOS, Windows 3.1, and Windows 95 in native mode, that is, without software translation that would degrade performance.

But CISC and RISC also reflect two rival computing philosophies. RISC processing requires short software instructions of the same length, which are easy to process quickly and in tandem by a CPU.

CISC technology, one of the oldest and most common, is not as efficient as RISC, but it is the most widespread, since it has been used from the beginning by Intel, the world's largest processor manufacturer, and by AMD, its competitor. There are millions of programs written for CISC that do not run on RISC. The difference between the two architectures has narrowed considerably as CISC processors have grown faster; even so, a RISC processor running at half the clock speed of a CISC processor performs almost as well, and in many cases much more efficiently.

Nowadays programs are increasingly large and complex and demand ever greater information-processing speed, which drives the search for faster and more efficient microprocessors.

Modern CPUs combine elements of both and are not easy to pigeonhole. For example, the Pentium Pro translates the long CISC instructions of the x86 architecture into simple fixed-length micro-operations that run on a RISC-style core. Sun's UltraSPARC-II accelerates MPEG decoding with special graphics instructions; these instructions achieve results that on other processors would require 48 instructions.

Therefore, in the short term, RISC CPUs and RISC-CISC hybrid microprocessors will coexist in the market, but with increasingly diffuse differences between the two technologies. In fact, future processors will fight on four fronts:

  • Execute more instructions per cycle.
  • Execute the instructions in a different order from the original so that interdependencies between successive operations do not affect processor performance.
  • Rename registers to alleviate their scarcity.
  • Help accelerate overall system performance in addition to CPU speed.


Microprogramming is an important and essential feature of almost all CISC architectures, such as the Intel 8086, 8088, 80286, 80386, and 80486, and the Motorola 68000, 68010, 68020, 68030, and 68040.

Microprogramming means that each machine instruction is interpreted by a microprogram located in memory on the processor's integrated circuit. In the sixties, microprogramming, due to its characteristics, was the most appropriate technique for the memory technologies of the time, and it also made it possible to develop processors with upward compatibility. As a consequence, processors were endowed with powerful instruction sets.

Composite instructions are internally decoded and executed with a series of microinstructions stored in an internal ROM. This requires several clock cycles (at least one per microinstruction). The fundamental goal of the CISC architecture is to complete a task in as few assembly lines as possible. This is achieved by building processor hardware that is capable of understanding and executing a series of operations.

Consider the task of multiplying two numbers held in memory. For this particular task, a CISC processor comes prepared with a specific instruction called MULT. When this instruction is executed, it loads the two values into separate registers, multiplies the operands in the execution unit, and then stores the product in the appropriate register. The entire task of multiplying two numbers can thus be completed with a single instruction:

MULT 2:3, 5:2

MULT is what is known as a "complex instruction".


It operates directly on the computer's memory banks and does not require the programmer to explicitly call any load or store functions. It closely resembles a command in a high-level language: if we let "a" represent the value at 2:3 and "b" the value at 5:2, then this command is identical to the C statement "a = a * b".

One of the primary benefits of this approach is that the compiler has to do very little work to translate a high-level language statement into assembly. Because the resulting code is relatively short, very little RAM is required to store the instructions. The emphasis is placed on building complex instructions directly into the hardware.
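As a toy illustration of the contrast described above (the memory addresses, register names, and instruction counts are hypothetical and not modeled on any real processor), the same multiply-in-memory task can be sketched both ways in Python:

```python
def risc_multiply(memory, regs, addr_a, addr_b):
    """RISC style: explicit load / operate / store, one simple step each."""
    regs['r1'] = memory[addr_a]           # LOAD  r1, addr_a
    regs['r2'] = memory[addr_b]           # LOAD  r2, addr_b
    regs['r1'] = regs['r1'] * regs['r2']  # MUL   r1, r2
    memory[addr_a] = regs['r1']           # STORE addr_a, r1
    return 4                              # four simple instructions issued

def cisc_multiply(memory, addr_a, addr_b):
    """CISC style: one MULT instruction works on memory operands directly."""
    memory[addr_a] = memory[addr_a] * memory[addr_b]  # MULT addr_a, addr_b
    return 1                              # one complex instruction issued

memory = {0x23: 6, 0x52: 7}
print(cisc_multiply(memory, 0x23, 0x52), memory[0x23])  # 1 42
```

The hardware trade-off hides inside that single call: the one CISC "instruction" still costs several internal micro-steps, while the four RISC instructions are each simple enough to execute in one clock cycle.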

CISC does not represent a processor-architecture proposal in the usual sense; it reflects the way processor architectures were developed and improved until about 1975. CISC, the Complex Instruction Set Computer, names the main current in computer architecture of that era and, perhaps, the tendency that the RISC movement set itself against.


All You Need to Know About Event Handler and WebAssembly


What is an event handler?

With an event handler, a software developer can control exactly what should happen in a program when a certain event occurs. The triggering events can have different origins, but they are often caused by user interaction.

Events occur in a wide variety of places in software, e.g., when a user clicks a button, moves the mouse pointer, or types something in a text field. Event handlers have the task of recognizing these events and then executing a predetermined action, e.g., temporarily storing the content of a text field as soon as the user changes it.

Event oriented software

Programs that do not always run linearly according to the same scheme, and that react to user input and behavior (or, more generally, to changes in state), work with events. This means that at certain points in the program it is expected that a given event could occur.

Accordingly, developers must provide a way in which the program code can deal with these events. For this purpose, it must first be monitored in the program code whether and when a predetermined event occurs. If it is determined that such an event has been triggered, the associated functionality or routine can then be executed.

Interface between code and events

Event handlers are most commonly used to create a connection between elements of the graphical user interface and the code behind it. Programmers thus have the possibility to react directly to user input or to other events. In addition, working with event handlers allows programs to react spontaneously and dynamically to events instead of actively waiting for a specific event and thus blocking resources.


  • The validation for a password field in a form is only triggered when the user has entered something in the associated text field and then left the field again.
  • Only when a user clicks the “Select Date” button should the program display a control for selecting the date.
  • While the user is already entering text in a text field, each new character is checked to determine whether the content exceeds the maximum number of characters allowed.
  • When a user presses a predetermined key or key combination, a predetermined event is to be triggered, e.g., switching to a certain view.
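The registration pattern behind all of these cases can be sketched in a few lines of Python (the Button class and the handler below are hypothetical examples, not the API of any specific GUI toolkit):

```python
class Button:
    """Minimal widget that lets callers register click handlers."""
    def __init__(self):
        self._handlers = []

    def on_click(self, handler):
        """Register an event handler to run when the button is clicked."""
        self._handlers.append(handler)

    def click(self):
        """Simulate the event firing: run every registered handler."""
        for handler in self._handlers:
            handler()

log = []
button = Button()
button.on_click(lambda: log.append("date picker opened"))
button.click()
print(log)  # ['date picker opened']
```

The key point is the inversion of control: the code that registers the handler does not wait for the click; the event source calls back into it whenever the event actually occurs.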

What is WebAssembly?

WebAssembly is intended as a supplement to JavaScript in the web browser. It is a bytecode that is supposed to help increase the speed of web applications. However, the system still has certain weaknesses.

WebAssembly, or Wasm for short, has the task of eliminating some of the shortcomings of JavaScript. That language is actually too messy and has too loose a syntax for the central role it has played in web development for quite some time, and it is not sufficient for modern performance requirements.

On a small scale, developers have resorted to Emscripten: C programs are compiled and executed in a virtual machine in the browser. On the whole, however, this does not work, because JavaScript cannot offer the necessary performance.

WebAssembly is designed to solve exactly these problems. It is a bytecode that constitutes an alternative virtual machine with sufficient performance.

WebAssembly simply explained

The easiest way to explain what Wasm does is with a metaphor: a hybrid vehicle has an electric motor and a petrol unit. The electric motor is the normal JavaScript environment of the browser. It is completely sufficient for everyday tasks, but it has performance limits. At such moments, the petrol engine (the Wasm machine) must be switched on to overcome them. The following things must be taken into account:

  • WebAssembly and the Javascript environment need to be able to communicate with each other to determine when the additional power is needed.
  • Wasm provides so-called "performance islands" rather than continuous additional power. In practice, the code compiled for WebAssembly is loaded and executed in the form of modules.
  • So that the modules can be generated, compilers are required that understand and can process the Wasm format.
  • Such compilers are available for C, C++, Rust, Blazor, and Clang, for example.
  • The compilers must ensure that handovers can take place in both directions. To stay in the picture: the petrol engine must be able to indicate when its power is no longer needed and not just receive commands.

The advantages of WebAssembly

All major browser developers (Microsoft, Google, Apple, Mozilla, Opera) are driving the development of WebAssembly as part of the “Open Web” platform. Since 2017, all popular browsers have supported the associated modules, with the exception of Internet Explorer, which has now been discontinued. The following advantages have been shown:

  • Web applications load faster.
  • The corresponding apps work more efficiently.
  • Wasm environments are safe because they run in a sandbox.
  • The bytecode is openly accessible and can be used accordingly for development and debugging.
  • In theory, it is even possible to use WebAssembly as a virtual machine outside of browsers. An interface called WASI should also make this possible in practice.

WebAssembly problems

One major problem WebAssembly struggles with is the difficulty of implementation. At the moment, Wasm modules can only be loaded in the browser either by manually programming a separate JavaScript loader or by using the "shell page" generated by Emscripten.

The latter, however, is difficult and sometimes impossible: many of the commands in the standard library require supporting infrastructure in the runtime, which is often not available. To take up the metaphor again: the user currently still has to ensure that the electric motor explains to the petrol engine what is required every time. Unfortunately, it does not understand some commands, which therefore have to be explained again and again (or stored in a new library that you create yourself).

The developers do have a solution in mind for this problem: script tags should help load the modules in the future. So far, however, this approach has not had any noticeable success.


All You Need to Know About Software Quality Assurance


What is Software Quality Assurance?

Quality assurance matters to software providers and their products, to developers, and to clients. The procedures and standards specified by a QA (Quality Assurance) program prevent product defects before they arise.

The term Quality Assurance (QA) is fundamentally associated with systematic processes. The focus is on the question: does the product being created meet the requirements placed on it? Corresponding assurance practices have existed for generations.

Today ISO, the International Organization for Standardization, deals with the mapping and certification of quality assurance practices. In its formalized form, QA has its origins in the manufacturing industry, where quality assurance processes are described in the ISO 9000 family of standards. Corresponding assurance concepts now shape the processes in most industries, and quite explicitly in software development.

Quality: assurance versus control

A basic definition serves to distinguish between the terms "quality assurance" and "quality control". The two terms undoubtedly have functional similarities, but there are notable differences in content:

  • Quality control is a production-oriented process. This includes processes such as testing and troubleshooting.
  • Quality assurance refers to a systematic process to ensure a specified requirement profile. The point is to prevent faulty program code and to deliver error-free products in this way.

The software developer and QA models

Quality assurance is central for software developers. After all, it is a matter of avoiding errors in software products before those errors manifest themselves. This approach reduces development time and also has a cost-reducing effect. Strategies for ensuring software quality are correspondingly diverse.

A model geared towards performance optimization bears the abbreviation "CMMI", for Capability Maturity Model Integration. This is an evaluation scheme based on maturity levels, which are ranked with the aim of optimization. Other models, all of which aim to increase work efficiency, include the following:

Waterfall model: the traditional, linear development approach. In this classic step-by-step approach, functional phases such as program design, code implementation, testing, corrections, and final approval follow one another in sequence.

Agile: a team-oriented development methodology; the so-called "sprint" represents one step in the work process.

Scrum: a combination of the waterfall and agile models, in which development teams are formed for special tasks. In this model, each task is divided into several sprints.

Tools and methodology in quality assurance

How can a software developer ensure that a high-quality end product is delivered to the client? Testing the software is an elementary part of software quality assurance. Today developers have numerous modern tools and work platforms at their disposal, which provide a high degree of automation of the test procedures. Proven tools frequently used in software houses include (in alphabetical order):

Jenkins: This open source quality assurance program offers the possibility of executing and testing code in real time. Thanks to its degree of automation, Jenkins is predestined for use in fast-moving program environments.

Selenium: another open source product. It enables tests to be carried out in numerous languages such as C#, Java, or Python.

Postman: a tool specially designed for web applications. This test tool is available for the Windows, Mac, and Linux operating systems. It supports editors and description formats such as Swagger and RAML.
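As a small, hypothetical illustration of the automated testing that such tools orchestrate, here is a unit test written with Python's built-in unittest module (the function under test, apply_discount, is invented for the example):

```python
import unittest

def apply_discount(price, percent):
    """Function under test (hypothetical): reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    """Automated checks of the kind a QA pipeline runs on every build."""
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_is_rejected(self):
        # Quality assurance also means checking that bad input is refused
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

A continuous-integration server such as Jenkins would run a suite like this automatically on every commit, which is precisely the "prevent defects before they arise" idea described above.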

Conclusion: quality assurance as a distinguishing feature

Quality Assurance (QA) differs from the actual testing routine; rather, it defines the standards to be applied when testing. Ultimately, this is done to ensure that a software product is fit for the requirement profiles specified in the order. Quality assurance teams make a valuable contribution to product quality, a competitive differentiator that should not be underestimated.


All You Need to Know About K Desktop Environment


K Desktop Environment, abbreviated as KDE by abbreviationfinder, has its own input/output system called KIO, which can access local files, network resources (through protocols such as HTTP, FTP, NFS, SMB, etc.), or virtual protocols (a photo camera, a compressed file, etc.) with complete transparency, benefiting every KDE application. KIO's modular architecture allows developers to add new protocols without requiring modifications to the base system.

Finally, its component framework (KParts) allows applications to be embedded within others, thus avoiding code redundancy throughout the system. Additionally, KDE has its own HTML engine, KHTML, which has been reused and extended by Apple (to create its Safari browser) and by Nokia.

KDE Project

As the project history shows, the KDE team releases new versions at short intervals. They are renowned for sticking to release plans, and it is rare for a release to be more than two weeks late.
One exception was KDE 3.1, which was delayed for over a month due to a number of security-related issues in the codebase. Keeping to strict release plans in a volunteer project of this size is unusual.

Major releases

Major Releases

Date: Release
July 12, 1998: KDE 1.0
February 6, 1999: KDE 1.1
October 23, 2000: KDE 2.0
February 26, 2001: KDE 2.1
August 15, 2001: KDE 2.2
April 3, 2002: KDE 3.0
January 28, 2003: KDE 3.1
February 3, 2004: KDE 3.2
August 19, 2004: KDE 3.3
March 16, 2005: KDE 3.4
November 29, 2005: KDE 3.5
January 11, 2008: KDE 4.0
July 29, 2008: KDE 4.1
January 27, 2009: KDE 4.2
August 4, 2009: KDE 4.3
February 9, 2010: KDE 4.4

A major release of KDE has two version numbers (for example KDE 1.1). Only the major releases of KDE incorporate new functionality. There have been 16 major releases so far: 1.0, 1.1, 2.0, 2.1, 2.2, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 4.0, 4.1, 4.2, 4.3, and 4.4. All releases with the same major version number (KDE 1, KDE 2, KDE 3, and KDE 4) are supported in both binary code and source code. This means, for example, that any software developed in KDE 4.2.X will work with all KDE 4 releases.

Except during major version changes, alterations never require recompiling or modifying the source code. This maintains a stable API (Application Programming Interface) for KDE application developers. The changes between KDE 1 and KDE 2 were large and numerous, while the API changes between KDE 2 and KDE 3 were comparatively minor, which meant that applications could easily be ported to the new architecture.

Major version changes to KDE are intended to follow those of the Qt Library, which is also under constant development. So, for example, KDE 3.1 requires Qt ≥ 3.1 and KDE 3.2 requires Qt ≥ 3.2. However, KDE 4.0 requires Qt ≥ 4.3 and KDE 4.1 requires Qt ≥ 4.4.

As soon as a major release is ready and announced, it is moved to a branch of the SVN repository, while work on the next major release begins on the main branch (trunk). A major release takes several months to complete, and many of the bugs found during this stage are also fixed in the stable branch.

Minor releases

Minor releases are scheduled less strictly. A minor KDE release has three version numbers (for example, KDE 1.1.1), and the developers focus on fixing bugs and improving minor aspects of the programs instead of adding functionality.


  • KDE was criticized in its early days because the library on which it is built (Qt), despite following an open-source development model, was not free. On September 4, 2000, the library began to be distributed under the GPL, and the criticism gradually ceased. Currently, and since version 4.5, the library is additionally available under the LGPL 2.1.
  • Some outsiders criticize KDE's similarity to the Windows desktop environment. This observation, however, concerns the environment's default settings, often chosen to make it easier to use for new users, most of whom are used to working with Microsoft operating systems. In fact, KDE is highly configurable, and in its 4.x branch it has desktop effects integrated into Plasma and KWin, comparable to those of Compiz.

Project Organization

Featured collaborators

Graphic designers:
  • Everaldo Coelho (Brazil)
  • Nuno Pinheiro (Portugal)

Programmers:
  • Aaron Seigo (Canada)
  • David Faure
  • Duncan Mac-Vicar Prett (Chile)
  • Dirk Mueller
  • Eva Brucherseifer
  • George Staikos
  • Lars Knoll
  • Matthias Ettrich

Like many other free projects, KDE is built primarily through the effort of volunteers. Since several hundred individuals contribute to KDE in various ways (programming, translating, producing art, etc.), organizing the project is complex. Most of the issues are discussed on the different project mailing lists.

Contrary to what one might expect of such a large project, KDE does not have centralized leadership; Matthias Ettrich, the founder of the KDE project, does not carry much weight in decisions about its direction. Important decisions, such as release dates or the inclusion of new applications, are made by the core developers on a restricted mailing list. Core developers are those who have contributed to KDE over a long period. Decisions are not made through a formal voting process but through discussion on the mailing lists. Generally this method works very well.

Any user is welcome to report a bug found in the software. It is also possible to request new functionality (a "wish"). It is enough to report it, in English, on the website set up for this purpose: the KDE Bug Tracking System. In legal and financial matters, the KDE project is represented by KDE e.V., a German non-profit organization.


All You Need to Know About SSL


SSL (from the English Secure Socket Layer, listed on abbreviationfinder) provides security services by encrypting the data exchanged between server and client with a symmetric encryption algorithm, typically RC4 or IDEA, and by encrypting the RC4 or IDEA session key with a public-key algorithm, typically RSA.


Secure Socket Layer is a set of general-purpose protocols designed in 1994 by Netscape Communications Corporation, based on the combined application of symmetric cryptography, asymmetric (public-key) cryptography, digital certificates, and digital signatures to provide a secure channel or means of communication over the Internet.

Symmetric cryptosystems, the main engine for encrypting the data transferred in the communication, are used for their speed of operation, while asymmetric systems are used for the secure exchange of the symmetric keys, thereby solving the problem of confidentiality in data transmission.


When you connect to a secure server (https://www…), the browser notifies you of this by means of a yellow padlock in the lower part of the window and also lets you inspect the information in the digital certificate that qualifies it as a secure server. SSL allows data such as credit card information to be collected in a secure environment, since information sent through a secure form is transmitted to the server in encrypted form.

SSL implements a negotiation protocol to establish secure communication at the socket level (machine name plus port), transparently to the user and the applications that use it.

When the client asks the secure server for secure communication, the server opens an encrypted port managed by software called the SSL Record Protocol, located on top of TCP. The higher-level software, the SSL Handshake Protocol, uses the SSL Record Protocol and the open port to communicate securely with the client.

The SSL Handshake Protocol

During the SSL Handshake protocol, the client and the server exchange a series of messages to negotiate security parameters. Very briefly, the protocol involves six phases:

  • The Hello phase, used to agree on the set of algorithms for privacy and authentication.
  • The key exchange phase, in which the parties exchange key information so that in the end both share a master key.
  • The session key production phase; this key will be used to encrypt the exchanged data.
  • The Server Verification phase, present only when RSA is used as the key exchange algorithm, which allows the client to authenticate the server.
  • The Client Authentication phase, in which the server requests an X.509 certificate from the client (if client authentication is required).
  • Finally, the end phase, which indicates that the secure session can begin.
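In practice, all of these phases are carried out for you by a TLS library (TLS being the modern successor of SSL). As a rough sketch, Python's standard ssl module performs the entire handshake, including server verification against the system's trusted certificates, inside wrap_socket(); the host name below is a placeholder and the network call is shown but not executed:

```python
import socket
import ssl

# Build a client context that enforces certificate validation,
# mirroring the Server Verification phase described above.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy SSL/TLS

def open_secure(host, port=443):
    """Open a connection whose handshake the library negotiates for us."""
    raw = socket.create_connection((host, port))
    # The Hello, key exchange, and verification phases all happen here:
    return context.wrap_socket(raw, server_hostname=host)

# conn = open_secure("example.com")   # real network call, not run here
print(context.check_hostname)  # True
```

Note that create_default_context() enables hostname checking and certificate validation by default, which is exactly the client-side half of the authentication described in the handshake phases.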

The SSL Record Protocol

The SSL Record Protocol specifies how to encapsulate transmitted and received data. The data portion of the protocol has three components:

  • MAC-DATA, the authentication code of the message.
  • ACTUAL-DATA, the application data to be transmitted.
  • PADDING-DATA, the data required to fill the message when using block encryption.

General characteristics

  • SSL implements a negotiation protocol to establish secure communication at the socket level (machine name plus port), transparently to the user and the applications that use it.
  • The identity of the secure web server (and sometimes also of the client user) is verified through the corresponding digital certificate, whose validity is checked before the exchange of sensitive data begins (authentication), while the integrity of the exchanged data is protected by the digital signature, by means of hash functions and the verification of digests of all the data sent and received.
  • SSL provides security services to the protocol stack, encrypting outgoing data from the Application layer before it is segmented at the Transport layer and encapsulated and sent by the lower layers. It can also apply compression algorithms to the data to be sent and fragment blocks larger than 2^14 bytes, reassembling them at the receiver.
  • In the OSI and TCP/IP reference models, SSL is introduced as a kind of additional layer, located between the Application layer and the Transport layer, replacing the operating system's sockets, which makes it independent of the application that uses it; it is generally implemented on port 443. (NOTE: ports are the interfaces between the applications and the TCP/IP protocol stack of the operating system.)


The most current version of SSL is 3.0. It uses the symmetric encryption algorithms DES, Triple DES, RC2, RC4, and IDEA; the asymmetric algorithm RSA; and the MD5 and SHA-1 hash functions.


All You Need to Know About LDAP


According to abbreviationfinder, LDAP is short for Lightweight Directory Access Protocol, an application-level protocol that allows access to an ordered, distributed directory service in order to search for various information in a network environment. LDAP can also be considered a database that can be queried. It is based on the X.500 standard. Usually it stores authentication information (user and password) and is used for authentication, although it can store other information as well (user contact data, location of various network resources, permissions, certificates, among other data). LDAP is a unified protocol for accessing a set of information on a network.


Telecommunications companies introduced the concept of directory services to information technology and computer networking; their understanding of directory requirements was well developed after some 70 years of producing and managing telephone directories. The culmination of this effort was the X.500 specification, a set of protocols produced by the International Telecommunication Union (ITU) in the 1980s.

X.500 directory services were traditionally accessed via DAP (Directory Access Protocol), which required the OSI (Open Systems Interconnection) protocol stack. LDAP was originally intended to be a lightweight alternative protocol for accessing X.500 directory services over the simpler (and now more widespread) TCP/IP protocol stack. This directory access model was modeled on the DIXIE and Directory Assistance Service protocols.

Standalone LDAP directory servers were soon implemented, as well as directory servers that supported both DAP and LDAP. The latter became popular in enterprises because they eliminated any need to deploy an OSI network. Today, the X.500 directory protocols, including DAP, can also be used directly over TCP/IP.

The protocol was originally created by Tim Howes (University of Michigan), Steve Kille (Isode Limited), and Wengyik Yeong (Performance Systems International) around 1993. A more complete development has been done by the Internet Engineering Task Force.

In the early engineering stages of LDAP, it was known as the Lightweight Directory Browsing Protocol, or LDBP. It was later renamed because the scope of the protocol had expanded to include not only directory browsing and search functions, but also directory update functions.

LDAP has influenced later Internet protocols, including later versions of X.500, XML Enabled Directory (XED), Directory Service Markup Language (DSML), Service Provisioning Markup Language (SPML), and Service Location Protocol (SLP).

Characteristics of an LDAP directory

  • It is very fast at reading records.
  • It allows the server to be replicated in a very simple and economical way.
  • Many applications of all kinds have LDAP connection interfaces and can be easily integrated.
  • It uses a hierarchical information storage system.
  • It allows multiple independent directories.
  • It works over TCP/IP and SSL (Secure Sockets Layer).
  • Most applications have LDAP support.
  • Most LDAP servers are easy to install, maintain, and optimize.

Given these characteristics, the most common uses of LDAP are:

  • Information directories.
  • Centralized authentication/authorization systems.
  • Email systems.
  • Public certificate and security key servers.
  • Single sign-on (SSO) for application customization.
  • Centralized user profiles.
  • Shared address books.

The Advantages of LDAP Directories

Now that we have the terminology straightened out, what are the advantages of LDAP directories? The current popularity of LDAP is the culmination of a number of factors. I’ll give you a few basic reasons, as long as you keep in mind that this is only part of the story.

Perhaps the greatest advantage of LDAP is that your company can access the LDAP directory from almost any computing platform, from any of the growing number of applications readily available for LDAP. It’s also easy to customize your internal company applications to add LDAP support.

The LDAP protocol is cross-platform and standards-based, so applications do not need to worry about the type of server the directory is hosted on. In fact, LDAP is finding much wider acceptance because of its status as an Internet standard. Vendors are more willing to code LDAP integration into their products because they don’t have to worry about what’s on the other side. Your LDAP server can be any of a number of commercial or open source LDAP directory servers (or even a DBMS server with an LDAP interface), since interacting with any true LDAP server involves the same protocol, client connection package, and query commands.

Unlike relational databases, you don’t have to pay for each client software connection or license.

Most LDAP servers are simple to install, easily maintainable, and easily tuneable.

LDAP servers can replicate some or all of your data through push or pull methods, allowing you to send data to remote offices, increase your security, and more. Replication technology is built in and easy to configure. By contrast, many DBMS vendors charge extra for this feature, and it is considerably more difficult to manage.

LDAP allows you to safely delegate reading and modification rights according to your needs using ACIs (collectively, an ACL, or Access Control List). For example, your facilities group could be granted access to change an employee’s location, cubicle, or office number, but not be allowed to modify entries in any other field. ACIs can control access depending on who is requesting the data, what data is being requested, where the data is stored, and other aspects of the record being modified. All of this is done directly through the LDAP directory, so you don’t need to worry about doing security checks at the user application level.
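The facilities-group example above can be modeled with a hedged toy sketch. This is not real ACI syntax (each LDAP server has its own), just an illustration of the idea: each rule names a group and the attributes it may modify. The group and attribute names (`facilities`, `roomNumber`, and so on) are assumptions made up for this example.

```python
# A toy model of ACI-style delegation: each rule grants one group write
# access to a fixed set of attributes. Rules and names are illustrative only.
ACIS = [
    {"group": "facilities", "allow_write": {"roomNumber", "l"}},
    {"group": "hr", "allow_write": {"title", "manager", "roomNumber", "l"}},
]

def can_modify(group, attribute):
    """Check whether a group is allowed to modify a given attribute."""
    return any(rule["group"] == group and attribute in rule["allow_write"]
               for rule in ACIS)

print(can_modify("facilities", "roomNumber"))  # True
print(can_modify("facilities", "title"))       # False
```

In a real directory server, checks like this run inside the server on every operation, which is why the application layer does not have to duplicate them.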

LDAP is particularly useful for storing information that you want to read from many locations, but that is not frequently updated. For example, your company could store all of the following data in an LDAP directory:

  • The company’s employee phone book and organizational chart
  • External customer contact information
  • Information on the service infrastructure, including NIS maps, email aliases, and more
  • Configuration information for distributed software packages
  • Public certificates and security keys

Using LDAP to store data

Most LDAP servers are heavily optimized for read-intensive operations. Because of this, you can typically see an order-of-magnitude difference when reading data from an LDAP directory versus getting the same data from an OLTP-optimized relational database. However, because of this optimization, most LDAP directories are not well suited to storing data that changes frequently. For example, an LDAP directory server is good for storing your company’s internal phone book, but don’t even think about using it as the database repository for a high-volume e-commerce site.

If the answer to each of the following questions is Yes, then storing your data in LDAP is a good idea.

  • Would you like your data to be available across various platforms?
  • Do you need to access this data from a number of computers or applications?
  • Do the individual records you are storing change only a few times a day or less, on average?
  • Does it make sense to store this type of data in a flat database instead of a relational one? That is, can you store all the data for a given item effectively in a single record?

This final question often makes people pause, because it is very common to access a flat record to obtain data that is relational in nature. For example, a record for a company employee might include the login name of that employee’s manager. It is fine to use LDAP to store this type of information. The acid test: if you can imagine all your data stored in an electronic Rolodex, then you can easily store your data in an LDAP directory.
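The employee-and-manager example can be sketched to show how a relational link still fits in a flat record: the employee's entry simply stores the manager's DN as an attribute value, and the application follows that reference with a second lookup instead of a relational join. All DNs and names here are hypothetical.

```python
# Flat records with a DN reference instead of a relational join: the
# employee's record points at the manager's record by DN. Placeholder data.
employees = {
    "uid=jdoe,ou=people,dc=example,dc=com": {
        "cn": "Jane Doe",
        "manager": "uid=boss,ou=people,dc=example,dc=com",
    },
    "uid=boss,ou=people,dc=example,dc=com": {"cn": "Big Boss"},
}

def manager_name(employee_dn):
    """Follow the manager DN reference stored in the employee's flat record."""
    manager_dn = employees[employee_dn].get("manager")
    return employees[manager_dn]["cn"] if manager_dn else None

print(manager_name("uid=jdoe,ou=people,dc=example,dc=com"))  # Big Boss
```

Each lookup touches exactly one record, which is the Rolodex-style access pattern the text describes: relational-looking data, but no join machinery required.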