Category: Acronyms

All You Need to Know About ANSOC

According to abbreviationfinder, ANSOC stands for the Cuban National Association of the Deaf. It is a national, permanent, non-governmental social organization with its own legal status and finances, based in Cuba.


It was founded on January 3, 1978. Its main objectives are to represent the Cuban deaf community, to achieve its greater integration into society, and to channel the concerns, desires, interests, difficulties and needs of its members to state agencies and social institutions.

Main objectives

  • Fight for the integration of its members into Cuban society with equal rights and duties, conditions and opportunities.
  • Promote interest in study, work, love of country, and thus achieve the proper use of their free time, increasing their participation in social, cultural, sports and recreational activities.
  • Raise and strengthen the cultural level, guiding them in their duties and rights as participants in the economic and social development of the country.
  • Work for the elimination of barriers in social communication that prevent or limit the normal life of the hearing impaired.
  • Contribute with educational actions on the family, in order to achieve a positive influence on their preparation for life.
  • Promote campaigns for the early detection of disabilities, against the abuse of toxic substances, on the use and care of hearing aids, and others that promote health education.
  • Facilitate bilingual education for the deaf community based on current educational terms.
  • Coordinate and obtain the support of the mass media, mainly television and the written press, to inform its members and the rest of the population of its objectives, purposes, activities and achievements.
  • Provide solutions to labor problems taking into account the opportunities of the employment program for disabled people (PROEMDIS).
  • Mobilize all members to fulfill the tasks for the development of Cuban society.

Achievements obtained

  • Increase in the number of places for interpreters.
  • Respect for the sign language of the deaf and introduction of it in the pedagogical context.
  • Creation of the national communication commission.
  • The training of deaf people as pedagogical assistants and instructors of Cuban Sign Language (LSC).
  • Participation and significant results in sport.
  • Formation of professional cultural groups.
  • Celebration of the First ANSOC Congress.
  • Participation of Cuba as a member of the World Federation of the Deaf (FMS, by its Spanish initials).

Thanks to the association's support, deaf people have been able to overcome some of the barriers that society imposes on them. It is ANSOC that guarantees them a measure of independence and self-worth by defending the personal identity conferred by the culture that characterizes their community.


All You Need to Know About NCIS

According to abbreviationfinder, NCIS stands for Naval Criminal Investigative Service. It is also the name of an American television series, first aired in 2003, that deals with criminal investigations involving the US Navy and Marine Corps.


Special Agent Leroy Jethro Gibbs leads a team of special agents belonging to the NCIS (Naval Criminal Investigative Service) major response team for Navy-related cases. Gibbs, a former Marine sniper, is a tough investigator and highly skilled interrogator who relies on instinct as much as evidence. His second in command is Special Agent Tony DiNozzo, a womanizer, movie buff and former Baltimore homicide detective who, despite being the class clown, always gets the job done. Also on the team are Agent Ziva David, a former Mossad officer and expert combatant, and Agent Timothy McGee, a computer expert and writer who became famous for a best seller about NCIS work based on his own co-workers, with only their names changed. Helping them in the lab is Abby Sciuto, who is like a daughter to Gibbs. Forensic examinations are handled by Dr. Donald Mallard, who has an anecdote from his life for every case and talks to the dead on the autopsy table; he is aided by medical student Jimmy Palmer.

The team is frequently assigned high-profile cases such as the death of the President's nuclear missile adviser, a bombing aboard a Navy warship, the death of a celebrity during a reality show on a United States Marine Corps base, terrorist threats and kidnappings. However, they can be assigned any type of criminal case, as long as it is related to the Marine Corps or the Navy.

Actors and Characters

  • Mark Harmon: Special Agent Jethro Gibbs
  • Michael Weatherly: Special Agent Anthony DiNozzo
  • Cote de Pablo: Agent Ziva David
  • Pauley Perrette: Abby Sciuto
  • David McCallum: Dr. Donald Mallard
  • Sean Murray: Special Agent Timothy McGee
  • Brian Dietzen: Jimmy Palmer


Seasons

  • Season 1: 2003-2004
  • Season 2: 2004-2005
  • Season 3: 2005-2006
  • Season 4: 2006-2007
  • Season 5: 2007-2008
  • Season 6: 2008-2009
  • Season 7: 2009-2010
  • Season 8: 2010-2011
  • Season 9: 2011-2012
  • Season 10: 2012-2013
  • Season 11: 2013-2014
  • Season 12: 2014-2015
  • Season 13: 2015-2016
  • Season 14: 2016-


All You Need to Know About IPv4

According to abbreviationfinder, IPv4 stands for Internet Protocol version 4, the fourth version of the Internet Protocol (IP) and the first to be deployed on a large scale. It is defined in RFC 791.


The TCP/IP suite is the set of protocols used to manage data traffic on the network. It is actually made up of two different protocols that perform different tasks.

On the one hand there is TCP, which is in charge of controlling the data transfer, and on the other there is IP, which is in charge of identifying each machine on the network.

Use of IPv4

IPv4 uses 32-bit addresses, limiting it to 2^32 = 4,294,967,296 unique addresses, many of which are dedicated to local area networks (LANs). Because of the enormous growth of the Internet (far greater than anticipated when IPv4 was designed), combined with the waste of addresses in many deployments, IPv4 addresses became scarce several years ago.
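As a quick sketch (using Python's standard ipaddress module), the 32-bit limit and a typical LAN block can be checked directly; the /16 network below is one of the private ranges reserved by RFC 1918:

```python
# Sketch: the size of the 32-bit IPv4 address space.
import ipaddress

total = 2 ** 32                      # 4,294,967,296 unique addresses
print(total)                         # 4294967296

# A block reserved for local area networks (RFC 1918):
lan = ipaddress.ip_network("192.168.0.0/16")
print(lan.num_addresses)             # 65536
print(lan.is_private)                # True
```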

This limitation helped spur the push towards IPv6, which is currently in the early stages of deployment and is expected to eventually replace IPv4.

The addresses available in the IANA global pool of the IPv4 protocol were officially exhausted on Thursday, February 3, 2011. From then on, the Regional Internet Registries had to manage their own pools, which were estimated to last until September 2011.

There are currently no fresh IPv4 addresses available for purchase, so migrating to IPv6 has become a pressing obligation. Operating systems such as Windows Vista and 7, Unix-like systems (GNU/Linux, Unix, Mac OS X) and the BSDs have native IPv6 support, while Windows XP requires opening a command prompt and typing ipv6 install to enable it; older systems do not support it at all.

Waste of addresses

The waste of IPv4 addresses is due to several factors.

One of the main ones is that the enormous growth the Internet would undergo was not anticipated initially: large address blocks (of 2^24, roughly 16.8 million, addresses each) were assigned to countries, and even to companies.

Another source of waste is that in most networks, except the smallest, it is convenient to divide the network into subnets. Within each subnet the first and last addresses are not usable, and the remaining addresses are rarely all used. For example, to accommodate 80 hosts in a subnet you need a 128-address subnet (rounding up to the next power of 2); the remaining 48 addresses go unused.
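The subnet arithmetic above can be sketched as follows; the numbers (80 hosts, 128-address subnet) mirror the example in the text:

```python
# Sketch: sizing a subnet for 80 hosts and counting the wasted addresses.
import math

hosts_needed = 80
# Each subnet also reserves the network and broadcast addresses, so we
# need at least hosts_needed + 2 slots, rounded up to a power of 2.
subnet_size = 2 ** math.ceil(math.log2(hosts_needed + 2))
usable = subnet_size - 2             # first (network) and last (broadcast) unusable
unused = usable - hosts_needed

print(subnet_size)  # 128
print(usable)       # 126
print(unused)       # 46 usable-but-idle, plus the 2 reserved = 48 not serving hosts
```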

Transition to IPv6

The transition involves everyone: from end users to developers of software, operating systems, and network and communications hardware, and in general all kinds of organizations.

The Internet is a network without an "executive management" as such; there is no single command. Rather, we all are the Network.

The IETF (Internet Engineering Task Force), the organization that standardizes Internet protocols, took up the task: work on the new IPv6 protocol began about 15 years ago, and since approximately 2002 the core of the protocol, relative to IPv4, has been fully finished and tested. ISOC (the Internet Society), on which the IETF administratively depends, has also supported the development and deployment of IPv6.


All You Need to Know About CD (Compact Disc)

A compact disc (popularly known as a CD, the acronym of the English term compact disc, according to abbreviationfinder) is an optical digital medium used to store any type of information (audio, images, video, documents and other data).

Today, it remains the preferred physical medium for audio distribution.

CD in Spanish language

In Spanish, «cedé» (written as it is pronounced) is now acceptable, since the form has been accepted and lexicalized by use. In much of Latin America it is pronounced [sidí], as in English, but the Pan-Hispanic Dictionary of Doubts (of the Association of Academies of the Spanish Language) advises against this pronunciation. The anti-etymological «cederrón» (from the acronym CD-ROM: compact disc read-only memory) is also accepted.


The compact disc was created in 1979 by the Japanese engineer Toshitada Doi (of Sony) and the Dutch engineer Kees Immink (of Philips). The following year Sony and Philips, which had jointly developed the Compact Disc digital audio system, began distributing compact discs, but sales were disappointing because of the economic slump at the time, so they decided to target the higher-quality classical music market. Thus began the launch of the new and revolutionary audio recording format, which would later extend to other data-recording sectors.

The optical system was developed by Philips, while the digital encoding and readout was carried out by Sony. The system was presented to the industry in June 1980, and 40 companies from all over the world joined the new product by obtaining licenses to produce players and discs. In 1981 the conductor Herbert von Karajan, convinced of the value of compact discs, promoted them during the Salzburg Festival (Austria), and from that moment their success began. The first titles recorded on compact disc in Europe were the Alpine Symphony (by Richard Strauss), the waltzes of Frédéric Chopin (performed by the Chilean pianist Claudio Arrau) and the album The Visitors (by the Swedish pop group ABBA).

In 1983, the American company CBS (which today belongs to the Japanese Sony Music) released the first compact disc in the United States: an album by pop singer Billy Joel.

Compact disc production was centralized for several years in the United States and Germany, from where discs were distributed worldwide. In the nineties factories were set up in various countries; for example, in 1992 Sonopress pressed the first CD in Mexico, entitled De mil colores (by Daniela Romo). In 1984 compact discs had opened up to the world of computing, allowing storage of up to 700 MB.

The diameter of the central hole of a compact disc was set at 15 mm: as the story goes, the creators were inspired by the diameter of the Dutch 10-cent guilder coin. The 12 cm diameter of the disc itself, in turn, corresponds to the width of the breast pocket of a man's shirt because, according to Sony's philosophy, everything had to fit there.

Recordable media

The three optical-read media are the CD-ROM, the CD-R and the CD-RW. CD-ROMs are pressed at the factory and cannot be recorded; CD-Rs can be burned only once; CD-RWs allow repeated reading and recording.

Physical details

Although the composition of the materials may vary, all discs follow the same pattern: a compact disc is made from a 1.2-millimetre-thick disc of polycarbonate plastic, to which a reflective aluminium layer is added to give the data longevity; this layer reflects the laser light (in the infrared range, and therefore not visible to the eye). A protective coat of lacquer is then applied to shield the aluminium and, optionally, a label is printed on the upper side.

Common printing methods for CDs are screen printing and offset printing. In the case of CD-R and CD-RW discs, gold, silver and their alloys are used; their ductility allows low-power lasers to record on them, something that could not be done on aluminium.


CD-ROMs are made up of a single spiral track with the same number of bits per centimetre in every section (constant linear density), to make better use of the storage medium rather than wasting space as magnetic disks do. This is why, when reading or writing a CD, the speed must decrease as the laser beam moves away from the centre of the disc: near the centre, one turn of the spiral is shorter than near the edge. By varying the rotation speed, the number of bits read per second remains constant in any section, whether at the centre or at the edge. If the rotation speed were constant, fewer bits per second would be read near the centre and more near the edge. All of this means that a CD rotates at a variable angular speed.

To achieve the same density in every section of the spiral, during recording the laser beam emitted by the head (which moves in a radial straight line from the centre to the edge of the disc) traces the spiral at constant linear velocity (CLV), meaning the number of bits recorded per second is constant. To maintain this, and with it a constant linear density along the spiral track, the CD must rotate at the variable angular speed explained above. Thus, with the disc spinning at variable angular speed and written at constant linear velocity, the same number of bits is written and read per second and per centimetre at any position on the CD, while each turn of the spiral contains more or fewer bits depending on whether it lies closer to the edge or to the centre.
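As a rough illustration of the CLV relationship described above, the angular speed ω = v / r can be computed for a few radii. The linear speed and radii below are typical 1x audio-CD values assumed for this sketch, not figures taken from the article:

```python
# Sketch: constant linear velocity means angular speed drops toward the edge.
import math

v = 1.3                      # assumed linear read speed in m/s (roughly 1x audio)
for r_mm in (25, 40, 58):    # assumed inner, middle and outer radii of the track area
    omega = v / (r_mm / 1000)            # angular speed in rad/s
    rpm = omega * 60 / (2 * math.pi)     # convert to revolutions per minute
    print(f"r = {r_mm} mm -> {rpm:.0f} rpm")
```

With these assumptions the disc spins at roughly 500 rpm at the inner edge and slows to a bit over 200 rpm at the outer edge, which is why early CD drives audibly changed speed while seeking.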

Standard CDs are available in different sizes and capacities, so we have the following variety of discs:

  • 120mm (diameter) with 74-80 minutes of audio time and 650–700MB of data capacity.
  • 120mm (diameter) with a duration of 90–100 minutes of audio and 800-875MB of data (not on the market today).
  • 80mm (diameter), which were initially designed for CD singles. These can store about 21 minutes of music or 210 MB of data. They are also known as “mini-CDs” or “pocket CDs.”

A standard CD-ROM can hold 650 or 700 (sometimes 800) MB of data. The CD-ROM is popular for distributing software, especially multimedia applications, and large databases. A CD weighs less than 30 grams. To put CD-ROM capacity in context: an average novel contains 60,000 words. Assuming an average word has 10 letters (in fact it is considerably fewer) and each letter occupies one byte, a novel would occupy 600,000 bytes (600 kB).

A CD can therefore hold more than 1,000 novels. If each novel occupies at least one centimetre of shelf space, a CD holds the equivalent of more than 10 metres of shelving. Since textual data can be compressed by a further factor of ten with compression algorithms, a CD-ROM can store the equivalent of more than 100 metres of shelf.
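The shelf arithmetic above can be checked in a few lines:

```python
# Back-of-the-envelope check of the "novels per CD" figures.
cd_capacity = 700 * 1024 * 1024      # 700 MB in bytes
novel_words = 60_000
bytes_per_word = 10                  # generous estimate, one byte per letter
novel_bytes = novel_words * bytes_per_word   # 600,000 bytes (600 kB)

novels_per_cd = cd_capacity // novel_bytes
print(novels_per_cd)                 # 1223, i.e. "more than 1,000 novels"
print(novels_per_cd / 100)           # > 10 metres of shelf at 1 cm per novel
```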


Once the problem of storing the data was solved, it remained to interpret it correctly. To that end, the companies that created the compact disc defined a series of standards, each reflecting a different level. Each document was bound in a different colour, giving rise to the name "Rainbow Books".


All You Need to Know About RSA

The RSA public-key algorithm was created in 1978 by Ron Rivest, Adi Shamir and Len Adleman of the Massachusetts Institute of Technology (MIT, according to abbreviationfinder); the letters RSA are the initials of their surnames. It is the best known and most widely used asymmetric cryptographic system, and it builds on the Diffie-Hellman paper on public-key systems.

The keys are computed secretly on the computer where the private key will be stored; once generated, the private key should itself be protected with a symmetric cryptographic algorithm.


Regarding key lengths, RSA allows variable sizes; keys of no fewer than 1024 bits are currently advisable (keys of up to 512 bits have been broken, although doing so took more than 5 months and almost 300 computers working together).


RSA bases its security on being a computationally secure one-way function: although modular exponentiation is easy, its inverse, extracting e-th roots modulo n, is not feasible unless the factorization of the modulus n, from which the system's private key is derived, is known. RSA is the best known and most widely used of the public-key systems, and also the fastest of them.


It has all the advantages of asymmetric systems, including the digital signature, although symmetric systems are more practical for confidentiality, as they are faster. RSA is therefore often used in hybrid systems to encrypt and transmit the symmetric key that will then be used in the encrypted communication.

RSA algorithm

The algorithm consists of three steps: key generation, encryption, and decryption.
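A minimal textbook sketch of the three steps, using deliberately tiny primes (the values below are the classic small teaching example, not a secure configuration; real keys use primes hundreds of digits long):

```python
# Toy RSA: key generation, encryption, decryption.
p, q = 61, 53                # two small primes (kept secret)
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e (2753)

m = 65                       # message, must be an integer < n
c = pow(m, e, n)             # encryption: c = m^e mod n
print(c)                     # 2790
assert pow(c, d, n) == m     # decryption: m = c^d mod n
```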

Padding schemes

RSA must be combined with some version of a padding scheme, since otherwise the value of m can lead to insecure ciphertexts. RSA used without a padding scheme suffers from several problems.

  • The values m = 0 and m = 1 always produce the ciphertexts 0 and 1 respectively, due to the properties of exponents.
  • When encrypting with small exponents (e.g. e = 3) and small values of m, the result m^e may be strictly less than the modulus n. In that case the ciphertext can easily be decrypted by taking its e-th root over the integers, ignoring the modulus.
  • Since RSA encryption is a deterministic algorithm (it has no random component), an attacker can mount a chosen-plaintext attack against the cryptosystem by encrypting a dictionary of likely plaintexts with the public key and storing the results. By observing the ciphertexts in a communication channel, the attacker can then use this dictionary to recover the content of messages.

In practice, the first two problems can arise when we send short ASCII messages, where m is the concatenation of one or more ASCII-encoded characters. A message consisting of a single ASCII NUL character (value 0) would be encoded as m = 0, producing a ciphertext of 0 no matter which values of e and N are used; likewise a single ASCII SOH (value 1) would always produce a ciphertext of 1. For systems that use small values of e, such as 3, a single-character ASCII message encoded this way would be insecure, since the maximum value of m would be 255, and 255³ is less than any reasonable modulus, so the plaintext could be recovered simply by taking the cube root of the ciphertext.

To avoid these problems, practical RSA implementations embed structured, randomized padding into the value of m before encryption. This technique ensures that m does not fall into the range of insecure plaintexts and that a given message, once padded, encrypts to one of a large number of possible ciphertexts. That last property inflates the attacker's dictionary, making such an attack intractable.
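The small-exponent case is easy to demonstrate: with e = 3 and a one-byte message, m^e never wraps around the modulus, so a plain integer cube root recovers the plaintext without the private key. The modulus below is an arbitrary illustrative value, not a real RSA modulus:

```python
# Demonstration of the small-exponent problem with unpadded RSA.
e = 3
n = 2 ** 1024 - 159          # hypothetical large modulus (illustrative only)
m = 255                      # largest single-byte ASCII value
c = pow(m, e, n)             # since 255**3 is far below n, c is simply 255**3

# Attacker side: integer cube root of the ciphertext, no key needed.
recovered = round(c ** (1 / e))
assert recovered == m
print(recovered)             # 255
```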

The RSA padding scheme must be carefully designed to prevent sophisticated attacks that could be facilitated by a predictable message structure. Examples of padding schemes used with RSA:

  • RSA-OAEP (Optimal Asymmetric Encryption Padding), or its modified version RSA-OAEP+. This type of padding is used, for example, in PKCS#1 and in the Tor anonymity network.
  • RSA-SAEP+ (Simplified Asymmetric Encryption Padding).
  • RSA-PSS (Probabilistic Signature Scheme), used for example in PKCS#1.

Message authentication

RSA can also be used to authenticate a message. Suppose Alice wants to send an authenticated message to Bob. She produces a hash value of the message, raises it to the power d modulo n (just as she does when decrypting), and attaches the result to the message as a "signature". When Bob receives the authenticated message, he applies the same hash algorithm and uses Alice's public key: he raises the received signature to the power e modulo n (just as when encrypting) and compares the result with the hash of the message. If the two match, he knows that the author of the message was in possession of Alice's private key and that the message has not been tampered with.
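A sketch of this sign-and-verify flow, reusing tiny textbook RSA parameters (n = 3233, e = 17, d = 2753) and folding a real SHA-256 digest into the toy modulus range, for illustration only:

```python
# Toy sign/verify sketch with textbook-sized RSA parameters.
import hashlib

n, e, d = 3233, 17, 2753     # toy public modulus/exponent and private exponent

def toy_hash(msg: bytes) -> int:
    # Fold a real SHA-256 digest into the range [0, n) to fit the toy modulus.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

msg = b"meet at noon"
signature = pow(toy_hash(msg), d, n)   # Alice signs with her private key d

# Bob verifies with Alice's public key (n, e):
assert pow(signature, e, n) == toy_hash(msg)
print("signature verified")
```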

Note that the security of padding schemes such as RSA-PSS is as essential for signing as it is for encryption, and that the same key should never be used for both encryption and authentication.


All You Need to Know About CEO, CFO, CIO, COO

When we speak of CEO, CFO, CIO, COO and other acronyms, we are referring to the main senior positions of a company, part of the so-called C-Level or C-Suite. Although Spanish speakers have their own names for these jobs, the English acronyms are what is currently used. In this article we define what they mean, what kind of positions they are and what function they serve.

What does a CEO do?

According to abbreviationfinder, the CEO, or Chief Executive Officer, is the person in charge of directing the organization, its head at the operational level; in Spanish, the equivalent is director general or executive director. It is one of the best-known acronyms in the business world, and because of its importance we have dedicated an exclusive article to what a CEO is and what his or her functions are.

What functions does a COO have?

The COO (Chief Operating Officer), or Director of Operations, oversees how the company's product creation and distribution systems are working to ensure that everything runs well. Working as COO often serves as training for the CEO position: it is a natural step, since the director of operations already understands the company's mission and objectives and is the executive director's right hand.

What does CMO mean? what is he in charge of?

The CMO (Chief Marketing Officer) is responsible for marketing activities, which include sales management, product development, advertising, market research and customer service. His main concern is to maintain a stable relationship with end customers and to communicate with all the other departments involved in marketing activities.

What is a CFO?

The CFO (Chief Financial Officer), or Financial Director, is in charge of the company's economic and financial planning. He decides on investment, financing and risk with the aim of increasing the company's value for its owners (whether shareholders or partners). He brings financial and accounting knowledge and, in general, an analytical view of the business. In many cases the CFO is also the CEO's strategic advisor.

What is a CISO?

The CISO (Chief Information Security Officer) is the executive responsible for an organization's information and data security. While in the past the role was fairly narrowly defined, today the title is often used interchangeably with chief security officer and vice president of security, indicating a broader role in the organization.

What is a CIO? what is he responsible for?

The CIO (Chief Information Officer) is responsible for the company's information technology systems at the process and planning level. The CIO analyzes what benefits the company can derive from new technologies, identifies which ones are of most interest to the company, and evaluates their performance. The role focuses on improving the efficiency of internal processes in order to ensure effective communication and keep the organization running efficiently and productively.

What is a CTO and what does he do?

The CTO (Chief Technology Officer) is the technical lead responsible for the development and proper functioning of information systems from the execution point of view. He is generally responsible for the engineering team and for implementing the technical strategy that improves the final product. The positions of CTO and CIO are sometimes confused, since in some companies they share tasks.

The key difference is that a CIO focuses on information systems (communication workflow), with the goal of increasing efficiency, while a CTO is responsible for technology strategy aimed at improving the end product.

What is a CCO and what is it responsible for?

The CCO (Chief Communications Officer) is in charge of managing corporate reputation, dealing with the media and developing branding strategies. He knows the media and maintains a good relationship with them so that the brand stays visible and, whenever possible, associated with positive messages. The CCO must play a relevant role in communication management so that the corporate image is good; in other words, so that the opinion of potential clients is favourable.

Other acronyms such as CGO, CRO, CXO, CDO

In the digital world you have to stay alert, because the work environment is becoming more international and is influenced by a predominantly Anglo-Saxon culture. That is why you need to know new professional profiles that did not even exist before and that have kept their English names, such as:

  • CGO – Chief Growth Officer
  • CXO – Chief Customer Experience Officer
  • CDO – Chief Digital Officer
  • CRO – Chief Remote Officer


All You Need to Know About Anxiety

To fully understand the term at hand, it is necessary to uncover its etymological origin. In this case we can state that it derives from the Catalan word "congoixa", which in turn comes from the Latin "congustia", a word that results from the sum of two distinct parts:
-The prefix "con-", which means "together".
-A noun element that is synonymous with "anguish" and "suffocation".

The term refers to anxiety, sadness or anguish. Whoever is distressed suffers from mental malaise. For example: "Grief over the death of the actress", "Overcome by grief, the woman sank into the sofa, not wanting to do anything", "With great anguish I had to inform the workers that the company will close its doors due to economic problems".

According to DigoPaul, anxiety is a common emotion that arises in certain situations that cause pain. The feeling can be externalized through crying or remain inside the individual.

Anxiety usually appears in the face of uncertainty and is linked to worry. When a person is fired from his job, to cite one case, he may feel grief as he worries about how he will meet his material needs without a salary. The future, in this outlook, is uncertain, and the subject has difficulty managing his anxiety. This negative charge can translate into different negative emotions, grief among them.

In some people, grief resembles fatigue or listlessness, making the sufferer want to stay home or in bed most of the time. Others in distress, on the contrary, move about excessively and frantically, unable to control their anxiety.

To overcome that grief, a series of tips and recommendations should be taken into account, such as these:
-The person must recognize that he is suffering, and also the reason that causes it.
-Of course, you have to be more affectionate with yourself: love yourself and pamper yourself a little more.
-You should not shut yourself off from others. Trust loved ones and ask for help and support.
-To overcome grief, it is essential that people are aware of their situation and of its cause, and that they are clear that they want to overcome it.

Feeling distress at certain times or situations is normal: it is a reaction to something that affects us. But if this emotion extends too much or is combined with other similar ones, the person may require psychological assistance to get ahead.

In addition to all the above, we cannot ignore that this is also the name of a widely used medicinal plant native to Mexico. Its leaves are boiled and used to heal throat sores, to relieve earaches and even to heal bleeding ulcers.


All You Need to Know About Conduit

Originating from the Latin word conductus ("conducted"), the word conduit describes a channel through which water, other fluids or objects pass, move and find an outlet. To cite some examples: "We have to call a plumber, since the bathroom duct is blocked"; "The air-conditioning unit has a duct that it uses to drain and that must always be kept clean to avoid problems in operation"; "The cat got into the air duct and we had to ask the fire department for help to get it out".

According to DigoPaul, ducts are also the tubes or channels found in the bodies of living beings, which support multiple physiological functions. One of these tubes is the so-called mammary duct, present in women, which carries milk from the breast lobes to the nipple.

The nasolacrimal duct, for its part, carries tears from the lacrimal sac to the nasal cavity. Another important channel is the external auditory canal, a cavity of the ear whose function is to let sounds travel from the pinna to the eardrum.

In men, the seminiferous ducts are where sperm are produced. The ejaculatory ducts, on the other hand, are those that allow the semen to reach the penis, where it is expelled.

And all this without forgetting the cystic duct, which drains the contents of the gallbladder, or the so-called vas deferens, found alongside each of a man's testicles, whose function is to carry sperm for ejaculation and excretion.

The hepatic duct in the liver, the inguinal canal in the abdomen and the spinal canal, which houses the spinal cord, are other ducts that also form part of the anatomy of a man's or woman's body.

On the other hand, it is worth noting a term that uses the concept we are analyzing as an integral part: the word safe-conduct, a document issued by a competent authority whose clear objective is that its holder may travel through a dangerous territory without running any risk, because that authority carries weight and is recognized in the area.

Thus, for example, during the Middle Ages it was common for monarchs to issue this type of document to their knights or vassals so that they could pass through the different cities of the kingdom without anything or anyone endangering their lives.

In addition to this, we can also state that safe conduct is that freedom that is given to someone to do something without fear of being punished.

Conduit, in another sense, is the intervention of a person to resolve a matter: “The official managed to leave the country through the mayor of the city, who interceded with the border authorities”, “The conduit that your father gave us was a great help”.


All You Need to Know About Fatty Heart


The term fatty heart disease, also called fatty heart or lipomatosis, describes various diseases of the heart region in which connective tissue is transformed into fat cells. This can have various causes, such as damage to the heart muscle tissue or obesity.

What is fatty heart?

According to DigoPaul, a fatty heart is either a side effect of obesity or an independent degeneration of the heart muscle. In the case of obesity, the right ventricle is particularly affected, which can lead to right heart failure. However, myocardial damage can also occur, for example as a result of chronic alcohol abuse.

The fatty degeneration also affects the left ventricle and is occasionally accompanied by dilated cardiomyopathy. The term must be distinguished from the so-called fatty myocardial degeneration, which occurs, among other things, in arrhythmogenic right ventricular cardiomyopathy.

Furthermore, it is important to distinguish fatty degeneration from coronary heart disease (“calcification” or “fatty degeneration” of the coronary arteries), for which the term is sometimes incorrectly used as a synonym.


As a side effect of general obesity, the heart is surrounded by a thick layer of fat in the case of fatty heart disease. If the disease occurs as an independent degeneration of the heart muscle, this is the result of the gradual conversion of muscle tissue into fatty tissue. The main causes of fatty degeneration in the heart are a high-fat, high-calorie diet and alcohol abuse.

However, persistent overstraining of the heart and diseases of the heart blood vessels can also cause the syndrome. Long-lasting high fever is another risk factor for fatty heart disease. This occurs, for example, in typhus, smallpox or pyaemia. The diseases that can provoke the development of the disease include anemia, pulmonary tuberculosis, scurvy and protracted suppuration and bleeding. Women and older people are particularly affected by fatty heart disease.

Symptoms, Ailments & Signs

The side effects of fatty heart disease include coronary symptoms such as palpitations and cardiac insufficiency. But general symptoms such as shortness of breath, easy fatigue, asthma-like breathing difficulties, anxiety, fainting spells and dizziness can also be signs of the disease.

Fatty heart disease as a result of obesity begins with right heart failure. This causes various symptoms, such as congested and dilated neck veins, edema, congested kidneys or congestive gastritis. If the left ventricle is affected, this can lead to dilated cardiomyopathy. Among other things, this causes progressive left-sided heart failure, cardiac arrhythmia, embolism and Cheyne-Stokes breathing as a sleep-related breathing disorder.

Diagnosis & disease progression

A fatty heart usually initially leads to right heart failure, which spreads to the entire heart over time. Dilated cardiomyopathy often develops as a long-term consequence. Right heart failure can be diagnosed clinically.

The enlargement of the heart can be shown with the help of echocardiography and a chest X-ray. Widening of the azygos vein and superior vena cava, including the right atrium, can be observed at diagnosis. The heart shifts to the left, with elevation of the apex, when the right heart is enlarged.

Dilated cardiomyopathy can also be diagnosed by echocardiography. The dilatation of the ventricles and the left atrium, hypokinesia and wall movement disorders can be determined. An MRI examines anatomy, heart function, and valve function. Biopsy and pathohistology may be used to rule out ischemic causes.

While fatty heart disease can be treated well in the early stages, a severe course with sudden onset of cardiac paralysis can be fatal. Therefore, a doctor should be consulted as soon as the first signs appear. This is the only way to prevent the obesity from causing irreparable damage to the heart.


A fatty heart can cause a number of complications. First, a fatty heart leads to circulatory problems such as high blood pressure, sweating and palpitations. These symptoms are usually accompanied by shortness of breath, fatigue and dizziness. This is often associated with a decrease in general well-being and, depending on the degree of fatty heart disease, the development of psychological problems.

In the further course, right-sided heart failure can also develop, which can later develop into complete heart failure. If the left ventricle is affected, dilated cardiomyopathy can develop later. This can lead to left ventricular failure, cardiac arrhythmia and the development of embolism.

Sleep-related breathing disorders such as Cheyne-Stokes breathing can also occur. A fatty heart as a result of obesity can also lead to dilated neck veins, edema and congested kidneys. In general, fatty heart disease increases the risk of heart attacks and other life-threatening complications.

If the causative disease is not treated, permanent cardiac insufficiency usually develops, which in turn is associated with further symptoms. With medical treatment of a fatty heart, major complications are unlikely. Only with rapid weight-loss cures and extreme fasting diets is there a risk of overloading the heart.

When should you go to the doctor?

If there is increased shortness of breath, dizziness or palpitations, there may be fatty heart disease. A doctor should be consulted if symptoms persist for more than a few days or if other symptoms develop. Disturbances of consciousness and fainting spells must be clarified immediately by a doctor. Major symptoms, such as persistent shortness of breath or palpitations, should also be examined promptly. People who are overweight are particularly at risk.

Patients who eat a generally unhealthy diet, drink a lot of alcohol or have a disease of the metabolic system are particularly likely to develop fatty heart disease. Anyone who counts themselves among these risk groups should consult a doctor if they experience the symptoms mentioned. If there is a suspicion of pronounced cardiac insufficiency, the disease may already be far advanced. Then you should see your family doctor immediately. Other contacts are the cardiologist or a specialist in internal diseases. In case of doubt, the medical emergency service can be contacted first. If the symptoms are severe, we recommend calling the emergency services.

Treatment & Therapy

In the initial stages of fatty heart disease, it is important to take countermeasures quickly in order to stop the progression of the disease. For this purpose, emotional and psychological stress and excessive physical exertion should be avoided. In order to strengthen the heart muscle and prevent the formation of further fat tissue, daily walks with a slowly increasing level of exertion are recommended.

Systematic remedial gymnastics can also drive the recovery process under medical supervision. A long stay in the fresh forest or mountain air is just as beneficial as strict adherence to a diet. Strong alcoholic drinks, coffee, tea or drinking too much water should be avoided, as these put a strain on the cardiovascular system.

While sugar, pastries and potatoes should be eliminated from the menu, eating vegetables and fruit is recommended. In any case, the treatment should be supervised by a doctor rather than setting up a therapy plan on your own. Rapid fat-reduction treatments are not recommended, as they eliminate the fatty tissue around the heart too quickly, causing the heart to lose its support.

Possible consequences are heart enlargement and cardiac insufficiency. In cases that are not too advanced, a careful defatting treatment under medical supervision can lead to the complete healing of the patient. Treatment in later stages is more difficult and usually only promises an alleviation of the symptoms.

Outlook & Forecast

The prospects of recovery from fatty heart disease differ depending on the stage of the disease and the patient’s initiative. At an early stage, the disease can be contained by taking regular walks in the fresh air and eating a balanced diet. Stress and excessive physical exertion are not recommended, as these can put too much strain on the heart and cause other diseases. The diet should be limited to a strict diet, without the consumption of acidic foods such as coffee, sugar and alcoholic beverages.

A stay in the mountains can also support healing. Here, for example, a stay in a spa facility is an option. In addition to sufficient exercise, the focus here is also on nutrition. If these criteria are met, the prognosis at this early stage of the disease is very positive.

However, if the fatty degeneration is advanced, the prospect of complete recovery is very poor. In most cases, this is only about curbing the symptoms and not making them worse. Other serious diseases should be avoided, which is why close medical care is necessary. However, the prognosis is bad. For this reason, a doctor should be consulted at the first sign of the disease.


In order to prevent fatty heart disease as a result of obesity, a health-promoting diet should come first. This means largely avoiding high-fat foods, or consuming them only in moderation. Fruits and vegetables should make up the bulk of meals to prevent excess fat accumulation.

The fatty heart that develops in the course of alcohol abuse can be prevented by moderate alcohol consumption. In any case, stress should be avoided as much as possible and exercise in the fresh air encouraged. Diseases that put a strain on the cardiovascular system should be treated by a doctor as soon as possible to rule out long-term consequences.


In the case of fatty heart disease, the patient usually only has very few follow-up measures available. The disease should generally be prevented to avoid this complication. In the worst case, the fatty degeneration of the heart can lead to the death of those affected if they are not treated properly. First and foremost, however, the cause of this obesity must be identified and treated so that the symptoms can be properly limited.

Self-healing does not occur in this case. The treatment itself depends very much on the exact cause of the fatty heart disease, with the doctor usually drawing up a nutrition plan for the patient. This must be strictly observed. In general, a healthy lifestyle with a healthy diet and physical activity also has a positive effect on the course of the disease.

Those affected should also refrain from smoking or consuming alcohol. Since fatty heart disease generally weakens the heart, regular examinations by a doctor should take place. Whether the disease reduces life expectancy depends heavily on the severity of this obesity. Contact with other people affected by the disease can also be useful, as this can lead to an exchange of information.


You can do that yourself

One of the main causes of lipomatosis (fatty heart) is an unhealthy diet, especially one that is high in fat and calories, together with permanently excessive alcohol consumption. In these cases, the patient himself can do a great deal to improve his state of health.

If the fatty heart is due to being overweight, a consistent change in lifestyle is essential. But this is very difficult for many people, and the support of the family doctor alone is usually not enough. Since a lack of knowledge and a lack of motivation are often the main causes of severe obesity, those affected are best advised to seek professional help. With a nutritionist, they learn which foods are healthy and which are better avoided, and they receive a nutrition plan tailored to their health problems and individual life situation. If necessary, those affected also learn how to prepare healthy food correctly. In addition, it helps many overweight people to join a self-help group, since weight reduction is a lengthy and tough process, especially in the case of severe obesity.

In addition to the right diet, regular physical exercise also plays an important role. If the weight already limits mobility, water sports, especially swimming and water aerobics, are recommended. In larger cities there are also gyms that specialize in overweight people. Training with special equipment is particularly effective and the membership fees usually motivate people to actually use the paid offer.

Anyone suffering from alcohol addiction should start therapy as soon as possible, although alcoholics also benefit from membership in an (anonymous) self-help group.


All You Need to Know About LCD


According to abbreviationfinder, LCD is short for liquid crystal display. An LCD is a thin, flat screen formed by a number of color or monochrome pixels placed in front of a light source or reflector. It is often used in battery-operated electronic devices because it consumes a very small amount of electrical energy.

It is present in countless devices, from the limited display of a pocket calculator to televisions of 50 inches or more. These screens are composed of thousands of tiny liquid crystals, which in reality are neither solid nor liquid but an intermediate state of matter.


Each pixel on an LCD typically consists of a layer of molecules aligned between two transparent electrodes and two polarizing filters, the axes of which are (in most cases) perpendicular to each other. Without liquid crystal between the polarizing filters, light passing through the first filter would be blocked by the second (crossed) polarizer.
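This blocking behavior follows from Malus's law, a standard optics result added here for illustration, which gives the intensity transmitted by a polarizer whose axis makes an angle θ with the light's polarization direction:

```latex
I = I_0 \cos^2\theta
```

With the two filter axes crossed (θ = 90°), cos²θ = 0, so no light gets through unless the liquid crystal in between rotates the polarization.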

The surface of the electrodes in contact with the liquid crystal material is treated to align the liquid crystal molecules in a particular direction. This treatment typically consists of a thin polymer layer that is rubbed unidirectionally using, for example, a cloth. The direction of the liquid crystal alignment is defined by the rubbing direction.

Before an electric field is applied, the orientation of the liquid crystal molecules is determined by the alignment at the surfaces. In a twisted nematic (TN) device, one of the most common liquid crystal devices, the surface alignment directions of the two electrodes are perpendicular to each other, organizing the molecules into a helical, or twisted, structure. Because the liquid crystal material is birefringent, light passing through the first polarizing filter is rotated by the liquid crystal helix as it travels through the layer, allowing it to pass through the second polarizing filter. Half of the incident light is absorbed by the first polarizing filter, but otherwise the entire assembly is transparent.

When a voltage is applied across the electrodes, a torque acts to align the liquid crystal molecules parallel to the electric field, distorting the helical structure (this is resisted by elastic forces, since the molecules are anchored at the surfaces). This reduces the rotation of the polarization of the incident light, and the device appears gray. If the applied voltage is large enough, the liquid crystal molecules in the center of the layer are almost completely untwisted, and the polarization of the incident light is not rotated as it passes through the liquid crystal layer. This light will then be polarized mainly perpendicular to the second filter, so it will be blocked and the pixel will appear black. By controlling the voltage applied across the liquid crystal layer at each pixel, light can be allowed to pass through in varying amounts, producing different shades of gray.

The optical effect of a twisted nematic (TN) device in the energized state is much less dependent on variations in the thickness of the device than in the unenergized state. Because of this, these devices are usually operated between crossed polarizers so that they appear bright with no voltage applied (the eye is much more sensitive to variations in the dark state than in the bright state). They can also be operated between parallel polarizers, in which case the bright and dark states are reversed; the unenergized dark state in this configuration appears uneven, however, due to small variations in thickness across the device.

Both the liquid crystal material and the alignment layer material contain ionic compounds. If an electric field of one particular polarity is applied for a long period, this ionic material is attracted to the surfaces and degrades the device's performance. This is avoided by driving the device with an alternating current, or by reversing the polarity of the electric field as the device is addressed.

When a device requires a large number of pixels, it is not feasible to drive each pixel directly, since each pixel would need its own electrode. Instead, the display is multiplexed. In a multiplexed display, the electrodes on one side of the display are grouped and wired together (usually in columns), and each group gets its own voltage source. On the other side, the electrodes are also grouped (usually in rows), and each group gets a voltage sink. The groups are designed so that each pixel has a unique, dedicated combination of source and sink. The electronics, or the software driving the electronics, then activates the sinks in sequence and drives the sources for the pixels of each sink.
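As a rough numeric sketch of why multiplexing matters (the 1366 x 768 panel size is just an example, and the function name is invented for illustration), compare the drive connections needed for a direct-drive versus a row/column-multiplexed display:

```python
def drive_connections(width_px, height_px, multiplexed=True):
    """Number of external drive connections for a width x height pixel array.

    Direct drive needs one connection per pixel; a multiplexed display
    groups electrodes into columns (sources) and rows (sinks), so it only
    needs one connection per column plus one per row.
    """
    if multiplexed:
        return width_px + height_px
    return width_px * height_px

# A 1366 x 768 panel driven directly vs. multiplexed:
print(drive_connections(1366, 768, multiplexed=False))  # → 1049088
print(drive_connections(1366, 768))                     # → 2134
```

The three-orders-of-magnitude reduction in wiring is what makes high-resolution panels practical at all.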


Important factors to consider when evaluating an LCD screen:


Resolution

The horizontal and vertical dimensions are expressed in pixels. HD-ready displays typically have a native resolution of 1366 x 768 pixels, while Full HD corresponds to a native resolution of 1920 x 1080 pixels (1080p).

Dot pitch

The distance between the centers of two adjacent pixels. The smaller the dot pitch, the less grainy the image. The dot pitch can be the same vertically and horizontally, or different (this is less common).
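A back-of-the-envelope way to compute the dot pitch from the panel diagonal and resolution (the 24-inch Full HD panel below is a made-up example, and the function name is invented for illustration):

```python
import math

def dot_pitch_mm(diagonal_inches, width_px, height_px):
    """Dot (pixel) pitch in millimetres: the physical diagonal length
    divided by the number of pixels along that diagonal."""
    diagonal_mm = diagonal_inches * 25.4
    diagonal_px = math.hypot(width_px, height_px)
    return diagonal_mm / diagonal_px

# A hypothetical 24-inch Full HD (1920 x 1080) panel:
print(round(dot_pitch_mm(24, 1920, 1080), 3))  # → 0.277
```

This simple formula assumes square pixels, i.e. equal horizontal and vertical pitch, which is the common case mentioned above.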


Size

The size of an LCD panel is measured along its diagonal, usually expressed in inches (colloquially called the active display area).

Response time

It is the time it takes for a pixel to change from one color to another.

Matrix type

Active, passive and reactive.

Viewing angle

It is the maximum angle from which a user can view the LCD offset from its center without losing image quality. Newer screens offer viewing angles of up to 178 degrees.

Color support

Number of supported colors. Colloquially known as color gamut.


Brightness

The amount of light emitted from the screen; also known as luminosity.


Contrast ratio

The ratio of the brightest to the darkest intensity.


Aspect ratio

The ratio of the width to the height (for example, 5:4, 4:3, 16:9 and 16:10).

Input ports

For example DVI, VGA, LVDS or even S-Video and HDMI.

Brief history

  • 1887: Friedrich Reinitzer (1858-1927) discovered that a cholesterol derivative extracted from carrots is a liquid crystal (that is, he discovered the existence of two melting points and the generation of colors), and published his findings at a meeting of the Vienna Chemical Society on May 3, 1888 (F. Reinitzer: Zur Kenntniss des Cholesterins, Monatshefte für Chemie (Wien/Vienna) 9, 421-441 (1888)).
  • 1904: Otto Lehmann publishes his work Liquid Crystals.
  • 1911: Charles Mauguin describes the structure and properties of liquid crystals.
  • 1936: The Marconi Wireless Telegraph company patents the first practical application of the technology, the “liquid crystal light valve”.
  • 1960 to 1970: Pioneering work on liquid crystals was carried out in the 1960s by the Royal Radar Establishment, in the town of Malvern, UK. The RRE team supported ongoing work by George Gray and his team at the University of Hull, who eventually discovered the cyanobiphenyl liquid crystals (which had the correct stability and temperature properties for application in LCDs).
  • 1962: The first major publication in English on the topic, Molecular Structure and Properties of Liquid Crystals, by Dr. George W. Gray.
    Richard Williams of RCA found that some liquid crystals had interesting electro-optical characteristics, and he realized an electro-optical effect by generating stripe patterns in a thin layer of liquid crystal material through the application of a voltage. This effect is based on a hydrodynamic instability forming what are now called “Williams domains” inside the liquid crystal.
  • 1964: In the fall of 1964, George H. Heilmeier, while working at the RCA laboratories on the effect discovered by Williams, noticed the color switching induced by the realignment of dichroic dyes in a homeotropically oriented liquid crystal. Practical problems with this new electro-optical effect led Heilmeier to continue working on scattering effects in liquid crystals and, finally, to the realization of the first functioning liquid crystal display, based on what he called the dynamic scattering mode (DSM). Applying a voltage to a DSM device switches the initially transparent liquid crystal layer into a milky, cloudy state. DSM devices could operate in transmission and reflection mode, but they required considerable current flow to operate.
  • 1970: On December 4, 1970, the patent for the twisted nematic field effect in liquid crystals was filed by Hoffmann-LaRoche in Switzerland (Swiss Patent No. 532,261), with Wolfgang Helfrich and Martin Schadt (who worked for its Central Research Laboratories) listed as inventors. Hoffmann-LaRoche then licensed the invention to the Swiss manufacturer Brown, Boveri & Cie, which produced watch devices during the 1970s, and also to the Japanese electronics industry, which soon produced the first digital quartz wristwatches with TN LCD screens and many other products. James Fergason at Kent State University filed an identical patent in the US on April 22, 1971. In 1971 Fergason's company ILIXCO (now LXD Incorporated) produced the first LCDs based on the TN effect, which soon replaced the poor-quality DSM types thanks to their lower operating voltages and lower power consumption.
  • 1972: The first liquid crystal active matrix display was produced in the United States by Peter T. Brody.

A detailed description of the origins and complex history of liquid crystal displays from an insider's perspective has been published by Joseph A. Castellano in Liquid Gold: The Story of Liquid Crystal Displays and the Creation of an Industry.

The same story, seen from a different perspective, has been described and published by Hiroshi Kawamoto (“The History of Liquid-Crystal Displays”, Proc. IEEE, Vol. 90, No. 4, April 2002); this document is publicly available at the IEEE History Center.

Color in devices

In color LCD screens, each individual pixel is divided into three cells, or sub-pixels, colored red, green and blue respectively by additional filters (pigment filters, dye filters and metal oxide filters). Each sub-pixel can be controlled independently to produce thousands or millions of possible colors for each pixel. CRT monitors use the same sub-pixel structure through the use of phosphors, although the analog electron beam used in CRTs does not address an exact number of sub-pixels.

Color components can be arranged in various pixel geometries, depending on how the monitor is used. If the software knows which geometry a particular LCD uses, this can be exploited to increase the apparent resolution of the monitor through sub-pixel rendering. This technique is especially useful for anti-aliasing text.


All You Need to Know About Hypertext Markup Language


According to abbreviationfinder, Hypertext Markup Language, better known as HTML, uses marks to describe how text and graphics should appear in a web browser, which in turn is prepared to read those marks and display the information in a standard format. Some web browsers include additional marks that can only be read and used by them, and not by other browsers. The use of non-standard marks in places of special importance is not recommended.


HTML started as a simplification of something called SGML, the Standard Generalized Markup Language, which is much more difficult to learn than HTML and is generally used to publish large amounts of data in different ways. In theory, every mark is not just a formatting code for the text but has a meaning of its own; therefore, everything usable should sit inside a mark with a meaning. To “read” an HTML page without a web browser, you need to be familiar with a few terms and concepts. The first of them is the source, or source code: the name for all the marks and text that make up the HTML file. The source is what you actually see when you open the HTML file in a text editor rather than a web browser.

A mark is the basic element of the code that formats the page and tells the browser how to display certain elements. Marks do not appear when web pages are displayed, but they are an essential part of HTML authoring. These symbols are essential, as they indicate to the browser that something is an instruction, not text that should appear on the screen.

Many marks require what is called an end mark. Generally, it is the same mark, but with a forward slash before its name. For example, the mark for bold text is <B>; it must be placed before the text it affects, with the closing mark </B> placed after it. If a mark is not closed, parts of the document may not be displayed correctly or, worse, the browser may render the page incorrectly due to the faulty syntax.

An attribute appears directly within a mark, between the < and > symbols. It modifies certain aspects of the mark and instructs the browser to display the information with additional special characteristics. For instance, although the mark for inserting an image is <IMG>, it has a required attribute, SRC, that tells the browser where the graphic file can be found. It also has several optional attributes such as HEIGHT, WIDTH and ALIGN.

Most attributes are optional and allow you to override browser defaults and customize the appearance of certain elements.
Whenever a mark appears with a forward slash, such as </B> or </HTML>, it is “closing”, or ending, the mark for that section. Not all marks have a closing mark (the image mark, for example), and for some of them it is optional (such as <P>).

An attribute usually has a value, expressed with the equals sign (=). Attribute values are always put in quotes, unless they are numbers, which do not need them (although quoting them anyway is a good habit).
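A minimal illustrative fragment combining marks, end marks and attributes might look like this (the file name and attribute values here are invented for the example):

```html
<!-- <B> needs its end mark </B>; <IMG> has no end mark -->
<B>This text appears in bold.</B>
<IMG SRC="logo.gif" WIDTH="100" HEIGHT="50" ALIGN="left">
```

Note that the numeric WIDTH and HEIGHT values would work unquoted, but quoting them is the good habit mentioned above.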

Browsers, compatibility

As we have said, the browser installed on the user's computer is what interprets the HTML code of the pages they visit, so it can sometimes happen that two users see the same page differently because they have different browsers installed, or even different versions of the same browser.

Today’s browsers claim to be compatible with the latest version of HTML. It is necessary to make extensions to the browsers so that they can be compatible with this latest version.

Two of the browsers that are continually making extensions are Internet Explorer and Netscape Navigator, which make extensions even before the standards are set, trying to include the new features included in the drafts.

Browsers have to be compatible with the latest HTML version in order to interpret as many tags as possible. If a browser does not recognize a tag, it ignores it and the effect that the tag intended is not reflected on the page.

To make the extensions of these browsers, new attributes are added to existing tags, or new tags are added.

As a result of these extensions, there will be pages whose code can be fully interpreted by all browsers, while others, by including new attributes or tags from the draft of the latest version of HTML, can only be fully interpreted by the most up-to-date browsers.
In the latter case, it can also happen that one tag on a page can be interpreted only by one specific browser, and another tag only by a different browser, so that no browser would ever display the page in its entirety.

One of the challenges of web page designers is to make the pages more attractive using all the power of the HTML language but taking into account these compatibility problems so that the greatest number of Internet users see their pages as they have been designed.


An editor is a program that allows us to write documents. Today there are a large number of editors that let you create web pages without writing a single line of HTML code. These editors have a visual environment and generate the page code automatically. Being able to see at all times how the page will look in the browser makes creating pages easier, and the use of menus helps you work faster.

These visual editors can sometimes generate junk code, that is, code that serves no purpose; on other occasions it may be more effective to correct the code directly, so it is necessary to know HTML in order to debug the code of the pages.
Some of the visual editors with which you can create your web pages are Macromedia Dreamweaver, Microsoft Frontpage, Adobe Pagemill, NetObjects Fusion, CutePage, HotDog Professional, Netscape Composer and Arachnophilia, some of which have the advantage of being free.

In aulaClic you can find courses on Macromedia Dreamweaver and Microsoft Frontpage, two of the most widely used editors today.
It is advisable to start with a tool that is as simple as possible, so that we have to insert the HTML code ourselves. This makes it possible to become familiar with the language, use a visual editor later, and debug the code when necessary.

To create web pages by writing the HTML code directly, you can use any plain text editor such as WordPad or Notepad in Windows, or the powerful Vim or Emacs editors, mainly in Unix and GNU/Linux environments.


Tags or marks delimit each of the elements that make up an HTML document. There are two types of tags, the beginning of an element and the ending or closing of an element.

The start tag is delimited by the characters <and>. It is made up of the identifier or name of the tag, and it can contain a series of optional attributes that allow adding certain properties. Its syntax is: <identifier attribute1 attribute2…>
The attributes of the start tag follow a predefined syntax and can take either arbitrary user-supplied values or predefined HTML values.

The end tag is delimited by the characters </ and >. It consists of the identifier or name of the tag and contains no attributes. Its syntax is: </identifier>

Each of the elements on the page will be found between a start tag and its corresponding end tag, with the exception of some elements that do not need an end tag. It is also possible to nest tags, that is, to insert tags between other beginning and ending tags.
It is important to nest tags properly: tags cannot be 'crossed'. For example, if we start with a <p> tag and then open a <font> tag before closing it, we must close the <font> tag before we close the <p> tag.
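This nesting rule can be checked mechanically. The sketch below (added for illustration, using only Python's standard html.parser module) keeps a stack of open tags and flags any 'crossed' close tag:

```python
from html.parser import HTMLParser

# Tags that have no end tag in HTML (void elements).
VOID_TAGS = {"br", "img", "hr", "input", "meta", "link"}

class NestingChecker(HTMLParser):
    """Track open tags on a stack and flag 'crossed' (badly nested) tags."""
    def __init__(self):
        super().__init__()
        self.stack = []
        self.errors = []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()          # properly nested close
        else:
            self.errors.append(tag)   # crossed or unmatched close

def check(html):
    checker = NestingChecker()
    checker.feed(html)
    return checker.errors + checker.stack  # bad closes + unclosed opens

# Properly nested: <font> closes before <p>.
good = "<p><font>text</font></p>"
# Crossed: </p> appears while <font> is still open.
bad = "<p><font>text</p></font>"
print(check(good))  # []
print(check(bad) != [])  # True
```

The same stack idea is what browsers and validators use internally to recover from badly nested markup.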


All You Need to Know About CISC

All You Need to Know About CISC

According to abbreviationfinder, CISC stands for Complex Instruction Set Computer. In this architecture the processor provides hundreds of instructions, and many steps and clock cycles may be needed to perform a single operation.

All PC-compatible x86 CPUs are CISC processors, while newer Macs and many engineering workstations have a RISC (Reduced Instruction Set Computer) CPU.


The practical difference between CISC and RISC is that CISC x86 processors run DOS, Windows 3.1, and Windows 95 in native mode; that is, without software translation that would degrade performance.

But CISC and RISC also reflect two rival computing philosophies. RISC processing relies on short software instructions of uniform length, which a CPU can process quickly and in parallel.

CISC technology, one of the oldest and most common, is not as efficient as RISC, but it is the most widespread, since it was used from the beginning by Intel, the world's largest processor manufacturer, as well as by its competitor AMD. There are millions of programs written for CISC that do not run on RISC. The difference between the two architectures has narrowed considerably with the increasing speed of CISC processors; even so, a RISC processor running at half the clock speed of a CISC processor can perform almost as well, and in many cases much more efficiently.

Nowadays, programs are increasingly large and complex and demand greater information-processing speed, which drives the search for faster and more efficient microprocessors.

Today's CPUs combine elements of both and are not easy to pigeonhole. For example, the Pentium Pro translates the long CISC instructions of the x86 architecture into simple fixed-length micro-operations that run on a RISC-style core. Sun's UltraSPARC-II accelerates MPEG decoding with special graphics instructions; these instructions achieve results that would require 48 instructions on other processors.

Therefore, in the short term, RISC CPUs and RISC-CISC hybrid microprocessors will coexist in the market, but with increasingly diffuse differences between the two technologies. In fact, future processors will compete on four fronts:

  • Execute more instructions per cycle.
  • Execute the instructions in a different order from the original so that interdependencies between successive operations do not affect processor performance.
  • Rename the registers to alleviate their scarcity.
  • Help accelerate overall system performance in addition to CPU speed.


Microprogramming is an important and essential feature of almost all CISC architectures, such as the Intel 8086, 8088, 80286, 80386, and 80486, and the Motorola 68000, 68010, 68020, and 68030.

Microprogramming means that each machine instruction is interpreted by a microprogram located in memory on the processor's integrated circuit. In the sixties, microprogramming, due to its characteristics, was the most appropriate technique for the memory technologies of the time, and it also made it possible to develop processors with upward compatibility. Consequently, processors were endowed with powerful instruction sets.

Composite instructions are internally decoded and executed with a series of microinstructions stored in an internal ROM. This requires several clock cycles (at least one per microinstruction). The fundamental goal of the CISC architecture is to complete a task in as few assembly lines as possible. This is achieved by building processor hardware that is capable of understanding and executing a series of operations.

Consider, for example, multiplying two numbers stored in memory. A CISC processor would come prepared with a specific instruction called MULT. When this instruction is executed, it loads the two values into separate registers, multiplies the operands in the execution unit, and then stores the product in the appropriate register. Thus, the entire task of multiplying two numbers can be completed with one instruction:

MULT 2:3, 5:2

MULT is what is known as a "complex instruction."


It operates directly on the computer's memory banks and does not require the programmer to explicitly call any load or store functions. It closely resembles a command in a high-level language. For example, if we let "a" represent the value at 2:3 and "b" represent the value at 5:2, then this command is identical to the C statement "a = a * b".

One of the primary benefits of this system is that the compiler has to do very little work to translate a high-level language statement into assembly. Because the length of the code is relatively short, very little RAM is required to store the instructions. The emphasis is placed on building complex instructions directly into the hardware.
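As a rough illustration of the contrast (a toy sketch in Python, not real machine code; the register names and the memory map keyed by "row:col" addresses are invented for this example), the single complex MULT instruction does the work of a whole load/multiply/store sequence:

```python
# Hypothetical illustration of CISC vs. RISC on a toy machine whose memory
# is addressed by "row:col" strings, as in the MULT 2:3, 5:2 example.

def cisc_mult(memory, dst, src):
    """One complex instruction: load both operands, multiply, store result."""
    memory[dst] = memory[dst] * memory[src]

def risc_mult(memory, registers, dst, src):
    """The same task as four simple fixed-length instructions."""
    registers["A"] = memory[dst]                      # LOAD  A, dst
    registers["B"] = memory[src]                      # LOAD  B, src
    registers["A"] = registers["A"] * registers["B"]  # PROD  A, B
    memory[dst] = registers["A"]                      # STORE dst, A

mem = {"2:3": 6, "5:2": 7}
cisc_mult(mem, "2:3", "5:2")       # MULT 2:3, 5:2
print(mem["2:3"])                  # 42
```

Both routines leave the same product in memory; the difference, as the text explains, is whether the decomposition into simple steps happens in the compiler (RISC) or inside the processor's microcode (CISC).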

CISC does not represent a processor architecture proposal in the usual sense. Rather, it reflects the way processor architectures were developed and improved until about 1975. CISC, the Complex Instruction Set Computer, is the name given to the main current in computer architecture of that period, and can perhaps be understood as the label assigned to the tendency that the RISC movement opposed.


All You Need to Know About Event Handler and WebAssembly

All You Need to Know About Event Handler and WebAssembly

What is an event handler?

With an event handler, a software developer can control exactly what should happen in the program when a certain event occurs. The triggering events can have different origins, but they are often caused by user interaction.

Events occur in a wide variety of places in software, for example when a user clicks a button, moves the mouse pointer, or types into a text field. Event handlers have the task of recognizing these events and then executing a predetermined action, for example temporarily storing the content of a text field as soon as the user changes it.

Event oriented software

Programs that do not always run linearly according to the same scheme, but instead react to user input and behavior (or, more generally, to changes in state), work with events. This means that at certain points in the program it is expected that a particular event could occur.

Accordingly, developers must provide a way for the program code to deal with these events. For this purpose, the program must first monitor whether and when a predetermined event occurs. If such an event is detected, the associated functionality or routine can then be executed.

Interface between code and events

Event handlers are most commonly used to create a connection between elements of the graphical user interface and the code in the background. This gives programmers the possibility to react directly to user input or to other events. In addition, working with event handlers allows programs to react spontaneously and dynamically to events instead of actively waiting for a specific event and thereby blocking resources.


Typical examples:

  • The validation of a password field in a form is only triggered when the user has entered something in the associated text field and then left the field again.
  • Only when a user clicks the "Select Date" button should the program display a control for selecting the date.
  • While the user is entering text in a text field, each new character is checked to determine whether the content exceeds the maximum number of characters allowed.
  • When a user presses a predetermined key or key combination, a predetermined event is to be triggered, for example switching to a certain view.
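The pattern behind these examples can be sketched in a few lines of Python (the EventEmitter class, its method names, and the "blur" event name are illustrative inventions, not taken from any particular GUI toolkit):

```python
# Minimal sketch of the handler pattern: handlers are registered per event
# name, and invoked when that event fires.

class EventEmitter:
    def __init__(self):
        self.handlers = {}

    def on(self, event, handler):
        """Register a handler for a named event."""
        self.handlers.setdefault(event, []).append(handler)

    def emit(self, event, *args):
        """Fire an event, calling every registered handler with its data."""
        for handler in self.handlers.get(event, []):
            handler(*args)

field = EventEmitter()
log = []

# Handler: validate the field's content when the user leaves the field,
# mirroring the password-validation example above.
field.on("blur", lambda text: log.append(len(text) >= 8))

field.emit("blur", "secret")                # too short -> False
field.emit("blur", "long enough password")  # -> True
print(log)  # [False, True]
```

The program code never waits for the event; the check runs only when `emit` fires, which is the non-blocking behavior described above.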

What is WebAssembly?

WebAssembly is intended as a supplement to JavaScript in the web browser. It is a bytecode that is supposed to help increase the speed of web applications. However, the system still has certain weaknesses.

WebAssembly, or Wasm for short, is meant to eliminate some of the shortcomings of JavaScript. That language is actually too loose and has too soft a syntax for the central role it has long played in web development. It is not sufficient for modern performance requirements.

On a small scale, developers have resorted to Emscripten: C programs are compiled and executed in a virtual machine in the browser. On the whole, however, this does not work, because JavaScript cannot offer the necessary performance.

WebAssembly is designed to solve exactly these problems. It is a bytecode that constitutes an alternative virtual machine with sufficient performance.

WebAssembly simply explained

The easiest way to explain what Wasm does is with a metaphor: a hybrid vehicle has an electric motor and a petrol engine. The electric motor is the normal JavaScript environment of the browser. This motor is completely sufficient for everyday tasks, but it has performance limits. In such moments, the petrol engine (the Wasm machine) must be switched on to make up the difference. The following things must be taken into account:

  • WebAssembly and the Javascript environment need to be able to communicate with each other to determine when the additional power is needed.
  • Wasm provides so-called "performance islands" rather than continuous extra power. In practice, the code compiled to WebAssembly is loaded and executed in the form of modules.
  • So that the modules can be generated, compilers are required that understand and can process the Wasm format.
  • Such compilers and toolchains are available, for example, for C and C++ (via Clang and Emscripten), Rust, and .NET (Blazor).
  • The compilers must ensure that handovers can take place in both directions. To stay with the metaphor: the petrol engine must be able to indicate when its power is no longer needed, not just receive commands.

The advantages of WebAssembly

All major browser developers (Microsoft, Google, Apple, Mozilla, Opera) are driving the development of WebAssembly as part of the “Open Web” platform. Since 2017, all popular browsers have supported the associated modules, with the exception of Internet Explorer, which has now been discontinued. The following advantages have been shown:

  • Web applications load faster.
  • The corresponding apps work more efficiently.
  • Wasm environments are safe because they run in a sandbox.
  • The bytecode is openly accessible and can be used accordingly for development and debugging.
  • In theory, it is even possible to use WebAssembly as a virtual machine outside of browsers. An interface called WASI should also make this possible in practice.

WebAssembly problems

One major problem WebAssembly struggles with is the difficulty of implementation. At the moment, Wasm modules can only be loaded in the browser either by manually programming a separate JavaScript loader or by using the "shell page" generated by Emscripten.

The latter, however, is difficult and sometimes impossible: many of the commands in the standard library require supporting infrastructure at runtime, which is often not available. To take up the metaphor again: the user currently still has to ensure that the electric motor explains to the petrol engine what is required each time. Unfortunately, the engine does not understand some commands, which therefore have to be explained again and again (or provided by a new library that you create yourself).

The developers do have a solution in mind for this problem: script tags should help to load the modules in the future. So far, however, this approach has not had any noticeable success.


All You Need to Know About K Desktop Environment

All You Need to Know About K Desktop Environment

K Desktop Environment, abbreviated as KDE by abbreviationfinder, has its own input/output system called KIO, which can access local files, network resources (through protocols such as HTTP, FTP, NFS, SMB, etc.), or virtual protocols (photo cameras, compressed files, etc.) with complete transparency, benefiting every KDE application. KIO's modular architecture allows developers to add new protocols without requiring modifications to the base of the system.

Finally, its component framework (KParts) allows applications to be embedded within others, thus avoiding code redundancy throughout the system. Additionally, KDE has its own HTML engine, KHTML, which has been reused and extended by Apple (to create its Safari browser) and by Nokia.

KDE Project

As the project history shows, the KDE team releases new versions in short periods of time. They’re renowned for sticking to release plans, and it’s rare for a release to be more than two weeks late.
One exception was KDE 3.1, which was delayed for over a month due to a number of security-related issues in the codebase. Keeping strict release plans on a voluntary project of this size is unusual.

Major releases

Major Releases
Date               Release
July 12, 1998      KDE 1.0
February 6, 1999   KDE 1.1
October 23, 2000   KDE 2.0
February 26, 2001  KDE 2.1
August 15, 2001    KDE 2.2
April 3, 2002      KDE 3.0
January 28, 2003   KDE 3.1
February 3, 2004   KDE 3.2
August 19, 2004    KDE 3.3
March 16, 2005     KDE 3.4
November 29, 2005  KDE 3.5
January 11, 2008   KDE 4.0
July 29, 2008      KDE 4.1
January 27, 2009   KDE 4.2
August 4, 2009     KDE 4.3
February 9, 2010   KDE 4.4

A major release of KDE has two version numbers (for example KDE 1.1). Only major releases of KDE incorporate new functionality. There have been 16 major releases so far: 1.0, 1.1, 2.0, 2.1, 2.2, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 4.0, 4.1, 4.2, 4.3, and 4.4. All releases with the same major version number (KDE 1, KDE 2, KDE 3, and KDE 4) are compatible at both the binary and source level. This means, for example, that any software developed for KDE 4.2.x will work with all KDE 4 releases.
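The compatibility rule can be expressed as a small check (an illustrative Python sketch, not an actual KDE tool; the function names are invented): two releases are compatible when their major version number matches.

```python
# Hypothetical sketch of the "same major version = compatible" rule above.

def parse_version(v):
    """Split a dotted version string into a tuple of integers."""
    return tuple(int(part) for part in v.split("."))

def compatible(built_against, running_on):
    """Software built for one release runs on any release with the same major."""
    return parse_version(built_against)[0] == parse_version(running_on)[0]

print(compatible("4.2.1", "4.4"))  # True: both are KDE 4 releases
print(compatible("3.5", "4.0"))    # False: a major version change
```

This is exactly why the KDE 1 to 2 and 3 to 4 transitions required application ports, while minor upgrades did not.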

Except during major version changes, alterations never require recompiling or modifying the source code. This maintains a stable API (Application Programming Interface) for KDE application developers. The changes between KDE 1 and KDE 2 were large and numerous, while the API changes between KDE 2 and KDE 3 were comparatively minor, which meant that applications could easily be ported to the new architecture.

Major version changes to KDE are intended to follow those of the Qt Library, which is also under constant development. So, for example, KDE 3.1 requires Qt ≥ 3.1 and KDE 3.2 requires Qt ≥ 3.2. However, KDE 4.0 requires Qt ≥ 4.3 and KDE 4.1 requires Qt ≥ 4.4.

As soon as a major release is ready and announced, it is added to a "branch" of the svn repository, while work on the next major release begins on the main line (trunk). A major release takes several months to complete, and many bugs found during this stage are fixed in the stable branch as well.

Minor releases

Separate release dates are scheduled less strictly for minor releases. A minor KDE release has three version numbers (for example KDE 1.1.1), and the developers focus on fixing bugs and polishing minor aspects of the programs rather than adding functionality.


  • KDE was criticized in its early days because the library on which it is built (Qt), despite following an open-source development model, was not free software. On September 4, 2000, the library began to be distributed under the GPL, and the criticism gradually ceased. Currently, and since version 4.5, the library is additionally available under the LGPL 2.1.
  • Some outsiders criticize KDE's similarity to the Windows desktop environment. This observation, however, applies to the default settings of the environment, often chosen to make it easier to use for new users, most of whom are used to working with Microsoft operating systems. In fact, KDE is highly configurable, and its 4.x branch has desktop effects integrated into Plasma and KWin comparable to those of Compiz.

Project Organization

Featured collaborators (function, name, origin):

  • Graphic designers: Everaldo Coelho (Brazil), Nuno Pinheiro (Portugal)
  • Programmers: Aaron Seigo (Canada), David Faure, Duncan Mac-Vicar Prett (Chile), Dirk Mueller, Eva Brucherseifer, George Staikos, Lars Knoll, Matthias Ettrich

Like many other free projects, KDE is built primarily through the effort of volunteers. Since several hundred individuals contribute to KDE in various ways (programming, translating, producing art, etc.), organizing the project is complex. Most of the issues are discussed on the different project mailing lists.

Contrary to what one might expect of such a large project, KDE does not have centralized leadership; Matthias Ettrich, the founder of the KDE project, does not carry much weight in decisions about the project's direction. Important decisions, such as release dates or the inclusion of new applications, are made by the main developers on a restricted mailing list. Lead developers are those who have contributed to KDE over a long period. Decisions are not made in a formal voting process, but rather through discussion on the mailing lists. Generally this method works very well.

Any user is welcome to report bugs found in the software ("bug"). It is also possible to make requests for new functionality ("wish"). It is enough to report it, in English, on the website provided for this purpose: the KDE Bug Tracking System. In legal and financial matters, the KDE project is represented by KDE e.V., a German non-profit organization.


All You Need to Know About SSL

All You Need to Know About SSL

SSL (from the English Secure Socket Layer, as listed on abbreviationfinder) provides security services by encrypting the data exchanged between server and client with a symmetric encryption algorithm, typically RC4 or IDEA, and encrypting the RC4 or IDEA session key using a public-key encryption algorithm, typically RSA.


Secure Socket Layer is a set of general-purpose protocols designed in 1994 by Netscape Communications Corporation, based on the combined application of symmetric cryptography, asymmetric (public-key) cryptography, digital certificates, and digital signatures to provide a secure channel of communication over the Internet.

Symmetric cryptographic systems, the main engine for encrypting the data transferred in the communication, are used for their speed of operation, while asymmetric systems are used for the secure exchange of the symmetric keys, thereby solving the problem of confidentiality in data transmission.


When you connect to a secure server (https://www…), browsers notify you of this by means of a padlock icon (in older browsers, a yellow padlock in the lower part of the window) and also allow you to check the information contained in the digital certificate that identifies it as a secure server. SSL allows you to collect data such as credit card information in a secure environment, since the information sent through a secure form is transmitted to the server in encrypted form.

SSL implements a negotiation protocol to establish secure communication at the socket level (machine name plus port), transparently to the user and to the applications that use it.

When the client asks the secure server for secure communication, the server opens an encrypted port managed by software called the SSL Record Protocol, located on top of TCP. The higher-level software, the SSL Handshake Protocol, then uses the SSL Record Protocol and the open port to communicate securely with the client.

The SSL Handshake Protocol

During the SSL Handshake protocol, the client and the server exchange a series of messages to negotiate security enhancements. The protocol comprises the following six phases (very briefly):

  • The Hello phase, used to agree on the set of algorithms for maintaining privacy and for authentication.
  • The key exchange phase, in which the two parties exchange information about the keys, so that in the end both share a master key.
  • The session key production phase, which will be used to encrypt the exchanged data.
  • The Server Verification phase, present only when RSA is used as the key exchange algorithm, and serves for the client to authenticate the server.
  • The Client Authentication phase, in which the server requests an X.509 certificate from the client (if client authentication is required).
  • Finally, the end phase, which indicates that the secure session can now begin.
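The handshake described above survives, in modernized form, in TLS, the successor of SSL, and Python's standard ssl module exposes the client side of it. The sketch below (illustrative only; no network connection is made) builds the context a client would use, showing the defaults that correspond to the Hello and Server Verification phases:

```python
import ssl

# Building a client context prepares the Hello phase: which protocol
# versions and cipher suites the client will offer to the server.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Defaults corresponding to the Server Verification phase: the server must
# present a certificate, and its hostname must match.
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# Cipher suites that would be offered in the client's Hello message:
print(len(context.get_ciphers()) > 0)            # True
```

Wrapping a TCP socket with `context.wrap_socket(sock, server_hostname=...)` would then run the actual handshake phases against a real server.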

The SSL Record Protocol

The SSL Record Protocol specifies how to encapsulate transmitted and received data. The data portion of the protocol has three components:

  • MAC-DATA, the authentication code of the message.
  • ACTUAL-DATA, the application data to be transmitted.
  • PADDING-DATA, the data required to fill the message when using block encryption.
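The three components can be illustrated with a toy Python sketch (this is not a real SSL record implementation; the HMAC-over-MD5 construction and the 8-byte block size are assumptions chosen to echo the MD5 hash and 64-bit block ciphers such as DES and IDEA mentioned in this article):

```python
import hmac
import hashlib

BLOCK_SIZE = 8  # e.g. a 64-bit block cipher such as DES or IDEA

def build_record(actual_data: bytes, mac_key: bytes) -> bytes:
    """Assemble MAC-DATA + ACTUAL-DATA + PADDING-DATA as described above."""
    mac_data = hmac.new(mac_key, actual_data, hashlib.md5).digest()  # MAC-DATA
    body = mac_data + actual_data                                    # + ACTUAL-DATA
    pad_len = (-len(body)) % BLOCK_SIZE
    padding_data = bytes(pad_len)                                    # PADDING-DATA
    return body + padding_data

record = build_record(b"hello", b"secret-key")
print(len(record) % BLOCK_SIZE)  # 0: the record fits whole cipher blocks
```

The padding exists only to round the record up to a whole number of cipher blocks; with a stream cipher such as RC4 it would be empty.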

General characteristics

  • SSL implements a negotiation protocol to establish secure communication at the socket level (machine name plus port), transparently to the user and the applications that use it.
  • The identity of the secure web server (and sometimes also of the client user) is obtained through the corresponding digital certificate, whose validity is checked before starting the exchange of sensitive data (authentication), while the integrity of the exchanged data is guaranteed by the digital signature, by means of hash functions and the verification of digests of all the data sent and received.
  • SSL provides security services to the protocol stack, encrypting outgoing data from the Application layer before it is segmented at the Transport layer and encapsulated and sent by the lower layers. It can also apply compression algorithms to the data to be sent and fragment blocks larger than 2^14 bytes (16,384 bytes), reassembling them at the receiver.
  • In the OSI and TCP/IP reference models, SSL is introduced as a kind of additional layer located between the Application layer and the Transport layer, replacing the operating system's sockets, which makes it independent of the application that uses it; it is generally implemented on port 443. (NOTE: ports are the interfaces between applications and the operating system's TCP/IP protocol stack.)


The most current version of SSL is 3.0. It uses the symmetric encryption algorithms DES, Triple DES, RC2, RC4, and IDEA, the asymmetric algorithm RSA, and the MD5 and SHA-1 hash functions.


All You Need to Know About LDAP

All You Need to Know About LDAP

Short for Lightweight Directory Access Protocol according to abbreviationfinder, LDAP is an application-level protocol that allows access to an ordered, distributed directory service to search for various kinds of information in a network environment. LDAP can also be viewed as a database that can be queried. It is based on the X.500 standard. It usually stores authentication information (user and password) and is used for authentication, although it can also store other information (user contact data, location of various network resources, permissions, certificates, among other data). In short, LDAP is a unified protocol for accessing a set of information on a network.


Telecommunications companies introduced the concept of directory services to Information Technology and Computer Networks, thus their understanding of directory requirements was well developed after 70 years of producing and managing telephone directories. The culmination of this effort was the X.500 specification, a set of protocols produced by the International Telecommunication Union (ITU) in the 1980s.

X.500 directory services were traditionally accessed via DAP (Directory Access Protocol), which required the OSI (Open Systems Interconnection) protocol stack. LDAP was originally intended to be a lightweight, alternative protocol for accessing X.500 directory services over the simpler (and now more widespread) TCP/IP protocol stack. This directory access model was modeled on the DIXIE and Directory Assistance Service protocols.

Standalone LDAP directory servers were soon implemented, as well as directory servers that supported DAP and LDAP. The latter became popular in enterprises because it eliminated any need to deploy an OSI network. Now, the X.500 directory protocols including DAP can be used directly over TCP / IP.

The protocol was originally created by Tim Howes (University of Michigan), Steve Kille (Isode Limited), and Wengyik Yeong (Performance Systems International) around 1993. A more complete development has been done by the Internet Engineering Task Force.

In the early stages of LDAP engineering, it was known as the Lightweight Directory Browsing Protocol, or LDBP. It was later renamed as the scope of the protocol had been expanded to include not only directory browsing and search functions, but also directory update functions.

LDAP has influenced later Internet protocols, including later versions of X.500, XML Enabled Directory (XED), Directory Service Markup Language (DSML), Service Provisioning Markup Language (SPML), and Service Location Protocol (SLP).

Characteristics of an LDAP directory

  • It is very fast in reading records.
  • It allows the server to be replicated in a very simple and economical way.
  • Many applications of all kinds have LDAP connection interfaces and can be easily integrated.
  • It uses a hierarchical information storage system.
  • It allows multiple independent directories.
  • It works over TCP/IP and SSL (Secure Socket Layer).
  • Most applications have LDAP support.
  • Most LDAP servers are easy to install, maintain, and optimize.

Given these characteristics, the most common uses of LDAP are:

  • Information directories.
  • Centralized authentication / authorization systems.
  • Email systems.
  • Public certificate servers and security keys.
  • Single sign-on (SSO) for application customization.
  • Centralized user profiles.
  • Shared address books.

The Advantages of LDAP Directories

Now that we have set the scene, what are the advantages of LDAP directories? The current popularity of LDAP is the culmination of a number of factors. I'll give you a few basic reasons, as long as you keep in mind that this is only part of the story.

Perhaps the greatest advantage of LDAP is that your company can access the LDAP directory from almost any computing platform, from any of the growing number of applications readily available for LDAP. It’s also easy to customize your internal company applications to add LDAP support.

The LDAP protocol is cross-platform and standards-based, so applications do not need to worry about the type of server hosting the directory. In fact, LDAP is finding much wider acceptance because of its status as an Internet standard. Vendors are more willing to code for LDAP integration into their products because they do not have to worry about what is on the other side. Your LDAP server can be any of a number of commercial or open-source LDAP directory servers (or even a DBMS server with an LDAP interface), since interacting with any true LDAP server involves the same protocol, client connection package, and query commands.

Unlike relational databases, you don’t have to pay for each client software connection or license.

Most LDAP servers are simple to install, easily maintainable, and easily tuneable.

LDAP servers can replicate some or all of your data through push or pull methods, allowing you to send data to remote offices, increase your security, and more. Replication technology is built in and easy to configure. By contrast, many DBMS vendors charge extra for this feature, and it is considerably more difficult to manage.

LDAP allows you to safely delegate read and modification rights based on your authorization needs, using ACIs (collectively, an ACL, or Access Control List). For example, your facilities group could be given access to change the location, cubicle, or office number of employees, but not to modify entries in any other field. ACIs can control access depending on who is requesting the data, what data is being requested, where the data is stored, and other aspects of the record being modified. All of this is done directly through the LDAP directory, so you don't need to worry about doing security checks at the user-application level.
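That delegation scheme can be modeled in a few lines (a hypothetical Python sketch; real ACIs use LDAP-specific syntax, not this dictionary form, and the group and attribute names here are invented):

```python
# Toy model of attribute-level access rules in the spirit of ACIs: each
# group may write only the attributes listed for it.
acl = {
    "facilities": {"write": {"location", "cubicle", "office"}},
    "hr":         {"write": {"cn", "mail", "manager"}},
}

def may_modify(group, attribute):
    """Return True if the group is allowed to write the given attribute."""
    return attribute in acl.get(group, {}).get("write", set())

print(may_modify("facilities", "cubicle"))  # True: location data is theirs
print(may_modify("facilities", "mail"))     # False: any other field is denied
```

In a real directory these rules live on the server, so every client application gets the same enforcement for free.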

LDAP is particularly useful for storing information that you want to read from many locations, but that is not frequently updated. For example, your company could store all of the following data in an LDAP directory:

  • The company’s employee phone book and organizational chart
  • External customer contact information
  • Information on the service infrastructure, including NIS maps, email aliases, and more
  • Configuration information for distributed software packages
  • Public certificates and security keys

Using LDAP to store data

Most LDAP servers are heavily optimized for read-intensive operations. Because of this, one can typically see an order-of-magnitude difference when reading data from an LDAP directory versus getting the same data from an OLTP-optimized relational database. However, because of this optimization, most LDAP directories do not do well with storing data that changes frequently. For example, an LDAP directory server is good for storing your company's internal phone book, but don't even think about using it as a database repository for a high-volume e-commerce site.

If the answer to each of the following questions is Yes, then storing your data in LDAP is a good idea.

  • Would you like your data to be available through various platforms?
  • Do you need access to this data from a number of computers or applications?
  • Do the individual records you are storing change only a few times a day or less, on average?
  • Does it make sense to store this type of data in a flat database instead of a relational database? That is, can you store all the data for a given item effectively in a single record?

This final question often makes people pause, because it is very common to access a flat record to obtain data that is relational in nature. For example, a record for a company employee might include the login name of that employee's manager. It is fine to use LDAP to store this type of information. The litmus test: if you can imagine all your data stored in an electronic Rolodex, then you can easily store your data in an LDAP directory.
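The employee/manager case reads naturally as flat records (an invented Python illustration; the attribute names cn, mail, and manager merely mimic common LDAP attribute names, and the data is fictional):

```python
# Each entry is a single flat record; the relational link to the manager is
# stored as a reference (the manager's login name), as an LDAP entry would.
directory = {
    "jdoe":   {"cn": "John Doe",    "mail": "jdoe@example.com",
               "manager": "asmith"},
    "asmith": {"cn": "Alice Smith", "mail": "asmith@example.com"},
}

def lookup(uid, attribute):
    """Read one attribute of one flat record, LDAP-style."""
    return directory[uid][attribute]

# Following the relational link simply takes a second flat lookup:
manager_uid = lookup("jdoe", "manager")
print(lookup(manager_uid, "cn"))  # Alice Smith
```

Every item fits in one record, so the data passes the flat-database test above.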


All You Need to Know About ESM

All You Need to Know About ESM

ESM: European Stability Mechanism

European Stability Mechanism, abbreviated as ESM by abbreviationfinder, is a set of instruments intended to guarantee the stability of the European Economic and Monetary Union as a whole.

The first version of the “Treaty establishing the European Stability Mechanism” was signed by the 17 member states of the eurozone on July 11, 2011, and a modified version on February 2, 2012. The ESM replaced the euro rescue package that had been set up for a limited period in May 2010. By 2018, 19 EU member states belonged to it.

Temporary rescue package: The euro rescue package was set up in May 2010 with the aim of averting the impending insolvency of Greece and thus cushioning the effects that a failure of this member state (pan-European bank rescue operations after a full suspension of payments) would have had on the euro zone. The rescue package, as adopted on May 9/10, 2010, comprises a guarantee volume of €750 billion and is composed as follows:

  • The European Financial Stabilization Mechanism (abbreviation EFSM), established as a community instrument of the EU, provides €60 billion from the EU budget. It ceases to apply at the latest when the ESM comes into force.
  • The European Financial Stability Facility (abbreviation EFSF), set up as an intergovernmental instrument, contributes €440 billion to the guarantee volume of the rescue package. The EFSF, founded as a special-purpose vehicle (seat: Luxembourg), can grant the country concerned loans at significantly lower interest rates than the indebted country would have to pay on the free capital market. The EFSF refinances these loans on the capital market, for which all member states of the euro zone are liable in accordance with their capital stake in the ECB. A prerequisite for the granting of loans is that the state concerned accepts a financial and economic recovery program aimed at restoring financial stability.
  • The IMF provides up to € 250 billion.

At summit meetings in March and July 2011, the heads of state and government of the euro zone decided to modify the EFSF: the guarantee framework of the euro countries was increased to €780 billion so that the full €440 billion can be raised as cheaply as possible (AAA rating) on the capital market. After this expansion, Germany has to provide guarantees in the amount of €211 billion. In addition, the competencies of the EFSF (and its successor, the ESM) were expanded: under certain conditions it may intervene in the primary market for government bonds and buy up old government debt on the secondary markets. Furthermore, a member state can receive loans »preventively«, i.e. before the (imminent) occurrence of insolvency. Both instruments presuppose the existence or adoption of austerity and reform programs in the affected states. Finally, states can obtain loans to recapitalize their financial institutions. This also applies to countries that are themselves stable but whose banks are threatened by the high debt of another country in the euro zone.

The Bundestag approved these extensions on September 29, 2011 after a controversial debate: 523 MPs voted for the bill, 85 voted no and 3 abstained. The Federal Constitutional Court had already dismissed constitutional complaints on the question of the extent to which the euro rescue package contradicts national or European law in its judgment of September 7, 2011. At the same time, however, the court obliged the federal government to obtain the approval of the Bundestag’s budget committee before every future step in the rescue measures.

Another summit in October 2011 decided, among other things, to leverage the EFSF’s resources by means of a so-called credit lever, so that it is expected to have €1 trillion at its disposal.

Permanent protection and emergency aid mechanism: As the debt crisis in the euro zone continued, the European Council decided in December 2010 to set up a permanent institutional protection and emergency aid mechanism and to expand Article 136 TFEU as follows: »The member states whose currency is the euro can set up a stability mechanism, which is activated when it is absolutely necessary to preserve the stability of the euro area as a whole. The granting of all necessary financial assistance within the framework of the mechanism will be subject to strict conditions.« This treaty change does not affect the no-bail-out clause, but it does allow support if the stability of the euro area as a whole is jeopardized. All members of the eurozone belong to the ESM (Articles 1 and 2 of the Treaty establishing the European Stability Mechanism), and its seat is in Luxembourg (Article 31, Paragraph 1). The voting members of the Board of Governors, which also decides on the granting of loans, are the members of the national governments of the member states responsible for finances (Article 5, Paragraph 1). The member of the European Commission responsible for economic and monetary affairs, the President of the ECB and the President of the Eurogroup, provided he is not a voting member of the Board of Governors, may also attend its meetings (Article 5, Paragraph 3). Decisions on financial aid from the ESM do not necessarily require unanimity; under certain circumstances, according to Article 4, Paragraph 4, a qualified majority of 85% of the votes cast suffices (since the voting rights are based on the contribution to the share capital, Germany, France and Italy, which each hold more than 15% of the ESM shares, retain a right of veto). With regard to its capital structure, the ESM draws on three sources:
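The 85% voting rule implies the veto mentioned in parentheses: a member holding more than 15% of the capital can block any decision on its own, because the remaining members then control less than 85% of the votes. A minimal sketch of that arithmetic (the shares used below are illustrative round numbers, not the official capital key):

```python
# Sketch of the ESM emergency voting rule: a decision passes with at least
# 85% of the votes cast, and voting weight follows the capital key.
def can_block_alone(share_pct: float, threshold_pct: float = 85.0) -> bool:
    # If one member opposes, the "yes" camp can muster at most 100 - share.
    return 100.0 - share_pct < threshold_pct

print(can_block_alone(27.0))  # a Germany-sized share: True (veto)
print(can_block_alone(10.0))  # a smaller share: False
```

Any share strictly above 15% therefore suffices for a veto, which is exactly why Germany, France and Italy retain one.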

  • € 80 billion are direct deposits made by the member states of the euro area.
  • The member states guarantee a total financial volume of €620 billion, of which a maximum of €420 billion will be disbursed as loans. The difference between the two amounts is intended to ensure that ESM bonds receive a very good credit rating (AAA) on the capital markets and that a lower interest rate is paid on them.
  • The IMF continues to participate with a loan amount of € 250 billion.

The instruments available to the ESM are similar to those of the EFSF (in addition to lending, precautionary credit lines, recapitalization of financial institutions and primary and secondary market interventions).

A member state can only receive support from the ESM if it is indispensable for the stability of the euro area as a whole. Generally it is granted in the form of loans. The board of directors must unanimously decide to grant the loan; it must be preceded by a debt sustainability analysis. The support is linked to strict requirements for the restructuring of public finances.

In principle, the member states of the euro zone are also involved in the financing of the ESM in accordance with their capital share in the ECB. According to the contribution key, the German share is 27.1464%; in terms of ESM capital to be paid in, this amounts to € 21.72 billion; Germany must provide guarantees of € 168.3 billion in terms of callable capital.
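The German figures quoted above can be checked directly against the capital key. A short consistency calculation, using only the numbers given in the text:

```python
# Consistency check of the quoted figures: Germany's ESM contribution key is
# 27.1464%, applied to both the paid-in capital (EUR 80 bn) and the
# callable capital (EUR 620 bn).
GERMAN_KEY = 27.1464 / 100

paid_in_total = 80e9    # EUR, direct deposits by the euro-area states
callable_total = 620e9  # EUR, guarantees by the euro-area states

german_paid_in = paid_in_total * GERMAN_KEY    # ~EUR 21.72 bn
german_callable = callable_total * GERMAN_KEY  # ~EUR 168.3 bn

print(round(german_paid_in / 1e9, 2))   # 21.72
print(round(german_callable / 1e9, 1))  # 168.3
```

Both results match the €21.72 billion paid-in and €168.3 billion callable figures stated above.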

Fiscal pact and further stabilization measures: At an EU summit in December 2011, the heads of state and government of the EU member states, with the exception of Great Britain, agreed to embark on the path to a fiscal policy union (Stability and Growth Pact). With the »Treaty on Stability, Coordination and Governance in Economic and Monetary Union« (»Fiscal Treaty« for short, also »Fiscal Compact«), which all EU member states except Great Britain and the Czech Republic signed on March 2, 2012, the contracting states essentially undertake to implement debt brakes in national law with strong binding force (preferably at constitutional level; structural, cyclically adjusted new borrowing must not exceed 0.5% of GDP) and to accept automatic sanctions if the new rules on budget discipline are not observed (the automatism can only be stopped by an express majority vote of the contracting states). The Fiscal Compact came into force on January 1, 2013.


All You Need to Know About EEC


European Economic Community, abbreviated as EEC by abbreviationfinder, English European Economic Community [jʊərəpiːən iːkənɔmɪk kəmjuːnɪtɪ], abbreviation EEC [iː iː siː], French Communauté Économique Européenne [kɔmynote ekɔnɔmik ørɔpeεn], abbreviation CEE [seə e], was a supranational community established for the purpose of economic integration by the Treaty of Rome (EEC Treaty), signed on March 25, 1957 between Belgium, the Federal Republic of Germany, France, Italy, Luxembourg and the Netherlands. The EEC Treaty came into force on January 1, 1958 and is valid for an unlimited period. With the Treaty establishing the European Union, which came into force on November 1, 1993, the EEC was renamed the European Community (EC) to document that its goals go beyond mere economic integration. With the entry into force of the Lisbon Treaty on December 1, 2009, the EC was absorbed into the EU.

The establishment of the common market (European internal market) can be seen as the first stage of integration; the second stage is an economic policy conducted according to uniform criteria, which is intended to accompany the third stage of integration, the monetary union (European Economic and Monetary Union, abbreviation EMU) with a common currency and monetary policy. From July 1, 1967 to November 30, 2009, the EEC/EC was part of the European Communities. It was also their most important sub-organization, as it was not limited to certain economic areas. With the accession of Denmark, Great Britain and Ireland on January 1, 1973, Greece on January 1, 1981, Spain and Portugal on January 1, 1986, Finland, Austria and Sweden on January 1, 1995, Estonia, Latvia, Lithuania, Malta, Poland, the Slovak Republic, Slovenia, the Czech Republic, Hungary and Cyprus on May 1, 2004, and Bulgaria and Romania on January 1, 2007, the EC grew considerably in economic and political importance.

European banking union

Banking union, English European Banking Union [jʊərəpiːən bæŋkɪŋ juːnjən], is a term covering various measures intended not only to support the European banking sector in the short term but also to restore its stability in the long term. The aim is to fundamentally strengthen the European Economic and Monetary Union. Following the Commission’s proposals, a general distinction is made between three pillars of the European Banking Union: 1. common banking supervision, 2. common rules for the resolution of banks, and 3. a common deposit insurance, which is intended to replace the national guarantee systems.

Member states of the banking union are all countries of the euro zone; the member states of the EU that do not belong to the euro zone can join voluntarily.

Common banking supervision: In 2013, the negotiators of the European Parliament, the Council and the Commission agreed on a legal basis for common banking supervision, the Single Supervisory Mechanism (SSM). Under it, the European Central Bank (ECB) directly supervises every bank in the participating countries with total assets of more than €30 billion or of more than 20% of the economic output of its home country (»systemically important banks«). Regardless of these conditions, the ECB directly supervises at least the three most important banks in each participating country. After the differences between the ECB and the European Parliament on accountability had been resolved, Parliament passed the banking supervision legislation on September 12, 2013. On October 15, 2013, the EU finance ministers finally approved this legal basis. A supervisory board to which each participating country sends a representative is responsible for the tasks of banking supervision. If the Governing Council does not accept the supervisory board’s proposals, a mediation committee resolves the dispute. This is to ensure that the final decision does not lie with the Governing Council and that monetary policy responsibility and supervision are clearly separated. Banking supervision began its work on November 4, 2014. In preparation, a comprehensive assessment (stress test) took place, the results of which the ECB published on October 26, 2014.
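The significance criteria described above (total assets over €30 billion, or over 20% of the home country’s economic output, or being among the country’s three most important banks) amount to a simple decision rule. A hypothetical sketch, with invented bank figures:

```python
# Sketch of the SSM significance test: a bank falls under direct ECB
# supervision if its total assets exceed EUR 30 bn, or exceed 20% of its
# home country's economic output; independently of that, at least the
# three largest banks of each participating country are supervised
# directly. All figures below are illustrative.
def directly_supervised(assets_eur: float, home_gdp_eur: float,
                        is_top3_in_country: bool) -> bool:
    return (assets_eur > 30e9
            or assets_eur > 0.20 * home_gdp_eur
            or is_top3_in_country)

print(directly_supervised(45e9, 3_500e9, False))  # large bank: True
print(directly_supervised(5e9, 20e9, False))      # 25% of a small economy: True
print(directly_supervised(5e9, 3_500e9, False))   # small bank, big economy: False
```

Note how the relative criterion catches banks that are small in absolute terms but systemic for their home country.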


All You Need to Know About European Union


Environmental policy has a shorter history in the EU than cooperation on business and industry. But despite a late start, it has been a success: every decision to tighten requirements leads to improvements in so many countries at once that the overall effect is large.

Through the Amsterdam Treaty of 1999, the principle of sustainable development became a fundamental goal of the EU. This means that in all decisions made at EU level, environmental considerations must be taken into account and any environmental impact examined.

Two principles in the environmental field must be guiding. One is the precautionary principle, which states that if a product is not proven harmless, it should not be approved. The other is called “the polluter pays” and means that the company, industry or industry that causes environmental damage must bear the costs.

Short for European Union by abbreviationfinder, the EU adopts multi-annual environmental programs. The current program runs until 2020 and has three main objectives: protecting and conserving the EU’s natural resources; making the EU a resource-efficient, green and competitive ‘low-carbon economy’; and protecting EU citizens from environmental and health hazards. One of the larger projects is to make the economy ‘circular’, which means taking environmental effects into account already when designing a product or service, in the form of environmentally friendly materials, packaging requirements, final recycling, and so on.

The EU’s successes in environmental policy include the regulation of chemicals, which forced a review of all chemicals on the market. The most dangerous were banned immediately; where a less dangerous alternative exists, the more dangerous versions are banned; and all other chemicals must be registered so that their use can be documented.

The department’s mixed success includes the protection of biodiversity. About 20% of the EU’s area has been set aside as natural areas where flora and fauna are to be protected. Despite this, work is too slow to save many plant and animal species from extinction.

The failures of environmental policy must include the fact that emissions from the transport sector have not been able to be reduced despite EU legislation, but on the contrary have increased since 1990. This applies to both road transport and aviation.

The EU has also regulated water and air quality, adopted a strategy for waste management and is working against disturbing noise.

The climate

In 2006, the EU adopted the world’s toughest package of measures to reduce climate change. In 2016, the EU had already exceeded the first of its four climate targets for 2020 – to reduce carbon dioxide emissions by 20 percent compared to 1990 (the reduction was then 23 percent). The target has now been raised to a 40 percent reduction by 2030.

The second goal also seems set to be realized: that 20 percent of the energy consumed in the EU should come from renewable energy sources. The EU used 17 percent renewable energy in 2017. The new goal is to reach 27 percent by 2030.

The third goal, to get the vehicle fleet to run on 10 percent renewable fuels, is possibly within reach thanks to the popularity of electric cars (7.1 percent in 2016).

The fourth goal is also considered entirely achievable: to reduce energy consumption by 20 percent compared to 2005. In 2016, EU countries had saved more energy than the set target (2 percent over), but a couple of cold winters lowered that result somewhat.

The target for 2030 will be 30 percent.

Within the UN framework, the EU has had mixed success in getting the rest of the world to agree on a global climate agreement. The Paris Agreement of 2015 won many supporters, including the largest emitters, the United States and China, but in 2017 US President Donald Trump announced that the United States would not implement its commitment.

However, researchers believe that the Paris Agreement offers too little and comes too late to meet the UN’s ambition of keeping global warming below 2 degrees. Despite this, the UN climate panel said in the autumn of 2018 that the world can still save the situation, but many new measures must be taken, and they must be taken quickly.


Energy

Energy is a hot political issue in several ways. It is about securing access to energy in a world where resources are declining and demand is increasing. It is also about the climate, because oil, coal and natural gas are responsible for the largest emissions of greenhouse gases. Finally, security policy is affected, because EU countries have to import 54 percent of their energy, mainly from Russia and the Middle East.

The EU has long sought to create a common energy market. Gradual deregulation in the mid-2000s, for example, forced dominant energy companies to open up their networks to competitors and gave customers the right to buy energy from non-domestic producers.

It was not enough to create a cohesive market. In Denmark, for example, operators are still forced to shut down their wind turbines on particularly windy days. The surplus electricity could be sold to northern Germany, where coal-fired power plants operate at full capacity, but there is a lack of transmission lines between the countries.

The EU has therefore directed regional aid and special energy allocations to infrastructure projects that connect the countries’ energy connections. New lines now link, for example, Sweden, the Baltics, Poland and Germany, while others connect Spain and France.

With the Treaty of Lisbon, energy policy became more of a common concern for EU countries. With the support of this, EU leaders gathered in 2015 for a decision on a European Energy Union. More energy networks are being expanded across borders, new technology enables smarter energy use and competition between energy companies has intensified through clearer pricing. In addition, the member states have committed themselves to help a country in solidarity in the event of an acute energy crisis.

However, energy remains a shared competence between the EU and the member states. The states retain control over important parts of energy policy such as energy taxes, the right to decide which energy sources they want (nuclear power is rejected in some countries and appreciated in others) as well as the right to conclude their own energy agreements with third countries.

The European Commission has tried to have the last word on energy agreements with third countries, in order to be able to slow down agreements that increase the EU’s dependence on imports. But EU countries have only agreed to inform Brussels in advance.


Meanings of SWOT Analysis Part III


Example of a SWOT analysis

A clear example of a SWOT analysis is the case of a fictitious library, where, for the sake of simplicity, the objective is simply to determine any need for action. One of the strengths is undoubtedly the high level of information and media competence of the employees. Weaknesses, on the other hand, are the lack of innovation in such an institution and the associated rather old-fashioned image, as well as low financial resources.

Opportunities, on the other hand, are the general appreciation of a library as a place of learning, especially against the background of increasingly important lifelong learning. In addition, individualization represents an opportunity to develop a monopoly position. Risks, by contrast, arise from the universally accessible internet and, above all, from the diverse range of eBooks, which call into question the added value that libraries traditionally offer.

An example of an avoidance strategy (weaknesses combined with risks) would be holding a fundraising event to set up a large internet area accessible to all, especially with introductory courses for senior citizens. This addresses the weaknesses mentioned and effectively cushions the risks.

Tools for SWOT analysis

Nowadays there are a number of aids and tools that serve as a template for a SWOT analysis or carry it out in general. An example of this is the publication “The TOWS Matrix – A Tool for Situational Analysis” by Heinz Weihrich, published in 1982. This provides an orientation for the situation analysis as a central part of the entire SWOT analysis.

Apart from that, it should be mentioned that the matrix listed here is already one of the most common tools and can be created by anyone. Apart from that, however, it is indeed advisable to use external service providers who have specialized in such areas for the analysis of the external factors.

Advantages and disadvantages of the SWOT analysis

According to WHOLEVEHICLES.COM, the advantages and opportunities of the SWOT analysis have already been indicated several times. It is a flexible instrument for determining the position of a company within the economic or cultural situation and in comparison to competitors. Another plus is that a SWOT analysis produces strategies which selectively target the remedying of grievances or the defense against threats. With all of this, users benefit not least from the transparency and clarity of such data collection.

A disadvantage of a SWOT analysis, on the other hand, is that it is a snapshot, so that regular renewal is necessary. In addition, there is a high research effort and the results are always dependent on the factors that you have found or determined yourself. There is therefore always a certain degree of subjectivity.

Common mistakes when doing a SWOT analysis

While a SWOT analysis offers a large number of opportunities and advantages, it should also be mentioned that there is a certain potential for error in the creation. It is precisely this that needs to be considered in advance and the work processes to be optimized accordingly. Common mistakes include:

  • Performing a SWOT analysis without a clearly defined goal
  • Confusing internal strengths with external opportunities
  • Confusing the actual SWOT analysis (states) with the possible strategies / necessary actions (actions)
  • Missing prioritization during the SWOT analysis and thus stagnation in the derivation of strategies and measures

Alternatives to the classic SWOT analysis

As mentioned, the SWOT analysis is associated with a certain amount of effort, especially in terms of the necessary research and the participating staff. In addition, it is the case that such an analysis is generally based on a highly rational view of the respective conditions and, as a rule, equally rational strategies and measures are taken. This is one of the risks of a SWOT analysis.

Therefore, it should not go unmentioned that alternatives in the sense of a more reactive and dynamic approach are sometimes valued. Marketing in particular plays an important role here, with the aim of analyzing public perception and customer opinions in as much detail as possible. From this, any actions needed to improve a company’s position or to optimize production and work areas ultimately follow.


The definition of the SWOT analysis follows from the acronym in its title: strengths, weaknesses, opportunities and threats are taken into account in this analysis and, in combination with one another, enable promising strategies. These can either aim at a mere increase in profit or constitute an effective defense against possible threats. Decisive for the meaningfulness of a SWOT analysis is always the objective defined in advance as well as the subsequent success control by means of key figures.

In order to carry out the actual analysis, little technical effort is generally required, but human resources and a certain amount of research are needed. As a tool or basis for the analysis itself, a simple matrix has proven its worth, in which the internal strengths and weaknesses are linked with the external opportunities and risks.
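The matrix linking internal and external factors can be sketched as a small data structure. The following is a hypothetical illustration using the library example from earlier; the factor labels are invented for demonstration, and the strategy names (SO/ST/WO/WT) follow Weihrich's TOWS scheme mentioned above.

```python
# A minimal SWOT/TOWS sketch: internal factors (strengths, weaknesses) are
# crossed with external factors (opportunities, threats) to yield the four
# classic strategy types. All labels are illustrative.
swot = {
    "strengths":     ["skilled staff"],
    "weaknesses":    ["low budget"],
    "opportunities": ["lifelong-learning trend"],
    "threats":       ["free online offerings"],
}

STRATEGY_NAMES = {
    ("strengths", "opportunities"):  "SO (expand)",
    ("strengths", "threats"):        "ST (defend)",
    ("weaknesses", "opportunities"): "WO (catch up)",
    ("weaknesses", "threats"):       "WT (avoid)",
}

def tows_matrix(swot: dict) -> dict:
    """Cross every internal factor with every external factor."""
    matrix = {}
    for (internal, external), name in STRATEGY_NAMES.items():
        matrix[name] = [(i, e) for i in swot[internal] for e in swot[external]]
    return matrix

for strategy, pairs in tows_matrix(swot).items():
    print(strategy, pairs)
```

The "avoidance strategy" from the library example corresponds to the WT cell: the weakness "low budget" paired with the threat "free online offerings".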

The SWOT analysis convinces users not least because of its simple and flexible usability. This applies, among other things, to companies, companies, non-profit institutions or even individuals. Depending on the reference, this results in, for example, a marketing instrument, a measure for general business planning or an opportunity for very individual career planning and personality development.


Meanings of SWOT Analysis Part I


The SWOT analysis, by definition and reference to the term itself, is the analysis of the strengths, weaknesses, opportunities and risks of a company, an organization or the like. It thus functions as a practical instrument for the strategic planning of these very institutions.

Detailed definition and development of the SWOT analysis

But what exactly does SWOT analysis mean? SWOT is an acronym for the English terms Strengths, Weaknesses, Opportunities and Threats. The analysis looks at precisely these aspects individually and in connection with one another in further application.

According to WHICHEVERHEALTH.COM, a SWOT analysis is used, for example, when a company wants to establish itself or already exists and intends to expand its influence or its success. The structure of the analysis is, according to the points mentioned, relatively simple. For example, very rational and direct questions arise:

  • What are the strengths of the company, the company, etc.?
  • What are the weaknesses, however?
  • What opportunities does the market offer?
  • What are the risks against it?

The first question is used to identify your own strengths. What distinguishes a company, what are its particular strengths, how was the current position achieved and are there even unique selling points?

Opposed to these are the weaknesses. These include in particular areas where competitors obviously do better work, but also general shortcomings. Likewise, fields of activity that are only moderately developed or not yet fully established, but which actually belong to the general standard of the industry, must be taken into account (for example, a non-existent internet connection in a library).

Opportunities are circumstances that arise through external influences and open up possibilities for the institution. These could be changes in the law, new technological innovations or emerging trends. The disappearance of a competitor also counts among them.

The counterpart to the opportunities, finally, are the threats. These refer to changes that likewise come from outside; in this respect, changes in the law and cultural or economic trends are included here as well. Furthermore, strong competition from other institutions or, in some cases, technological innovations represent a risk. (For the library, this includes, for example, today’s extensive online offerings and eBooks.)

Who Invented SWOT Analysis?

Nowadays, the SWOT analysis stands for a systematic situation analysis with which companies, organizations and so on work. Especially in the business plans of a company or a company, the analysis appears nowadays mainly to plan economic success. The SWOT analysis is also a common instrument in marketing.

In fact, however, the origin of the analysis lies in pre-Christian China. An often-cited rationale comes from the Chinese general, military strategist and philosopher Sunzi: “If you know the enemy and yourself, you need not fear the outcome of a hundred battles.”

The basic idea of the SWOT analysis can already be seen in this sentence. Sunzi recommended knowing yourself, i.e. your own strengths and weaknesses, and also studying the enemy, i.e. the opportunities and risks facing you. Both aspects in relation to each other ultimately enable a prognosis of success and failure.

The bridge from this metaphorical point of view to actually applicable analysis was built in the 1960s at Harvard Business School. Henry Mintzberg, a Canadian professor of business administration and management, developed the SWOT analysis as a basis for formalizing processes of a company’s strategy development.

What can a SWOT analysis do?

Although the history of the development may seem impressive, the question that certainly arises for entrepreneurs, start-ups and similar institutions is: What is the point of a SWOT analysis? What are the benefits of the sometimes time-consuming process of analysis from an economic point of view?

The analysis can generally already have the effect that resources are used sensibly. It helps ensure that a company does not miss opportunities. It also shows where capital can still be used in order to subsequently increase the company’s budget. This in turn can be used to finance new projects and promote certain products.

However, the SWOT analysis can also clarify whether a company is at risk from a certain point of view and may even have to file for bankruptcy in the foreseeable future. In this sense, it can also help to prevent an impending bankruptcy by showing which measures achieve ad hoc success or where a company’s Achilles heel lies.

Figuratively speaking, the analysis functions on the one hand as a Swiss Army knife for management in threatening economic situations and, on the other, as an orientation aid for maintaining or optimizing a successful course and for achieving certain goals even more reliably and quickly.

The goal of the SWOT analysis

When setting the goal of a SWOT analysis, the main focus is on creating an overview of the current situation. This means that it is a matter of determining the status of an institution, a company or the like in the market and in comparison to the competition. For example, a company can get useful information about itself and use it as orientation for actions.

The analysis can influence strategic management by asking which factors are relevant for success. The analysis is also used to create a general business plan, for example when founding a start-up. In addition, the SWOT factors are helpful guidelines for marketing and customer service.

This means that the actual objective of the SWOT analysis is to reveal, by considering internal and external aspects, meaningful options for action that lead to sustainable success. It offers valuable support in designing strategies to meet constant change in the market, increasing competition and, not least, the steadily growing demands of customers.

On the one hand, from an entrepreneurial point of view, the analysis highlights any competitive advantages compared to the competition as well as growth potential that can be directly tapped. In addition to such perspective approaches, it also enables decision-makers to react immediately to existing weaknesses or risks.

In addition, the SWOT analysis and its objectives should not be viewed exclusively with reference to entire companies and organizations. In fact, it is also used by individuals for personal development and reflection. In this sense, by looking at S, W, O and T, very personal talents and abilities can be recognized and various career opportunities can be considered or excluded.

Key figures of the SWOT analysis

The basic goal setting and the subsequent actual analysis are of course only parts of an overall process in order to achieve the set goals. Other sub-processes include the definition of measures and strategies, the associated budget planning and, last but not least, the development of useful key figures in order to be able to control any progress and successes at all.

These key figures, so-called Key Performance Indicators (KPI), can, for example, be mere figures with regard to costs, earnings and sales, which of course only cover superficial points. On the other hand, it makes more sense to go into as much detail as possible with the key figures and to link them as closely as possible with the analysis aspects and the measures taken. It is important to mention that the data collected can be leading indicators on the one hand and lagging indicators for results on the other.

A widespread and quite productive system of indicators is, for example, one based on four perspectives, where the first three are leading indicators and the last is a lagging indicator:

  • Customers: Acquisition of new customers and satisfaction of existing customers
  • Organization: operating processes that are as fluid and flawless as possible (both in business and in production)
  • Employees: satisfaction, commitment and constant motivation
  • Finance: profit, profitability and liquidity of the company

Key figures therefore ultimately represent the necessary means of monitoring success. Their definition, in conjunction with the initially set objective of the analysis, ultimately indicates whether and how productive the analyzed data were and what benefits the strategies and measures derived from them actually provided. The key figures show which strengths were used sensibly, which weaknesses were countered, how opportunities were seized and how risks were averted.


Meaning of CFD

Meaning of CFD

According to abbreviationfinder, CFD stands for “Contract for Difference”. With a CFD, investors can make profits by speculating on price changes. Contracts for difference are particularly attractive because they are characterized by a high level of leverage. This means that even a low capital investment can bring high profits. However, leverage can also have a negative effect: If the price does not develop as hoped, investors can quickly lose large sums of money. Therefore, CFDs are not suitable for beginners, only for experienced investors.

  • With a CFD, investors and providers agree to swap money and base value at the beginning and end of the term of the CFD.
  • In order to be able to trade with CFDs, the trader must deposit a security deposit (“margin”).
  • CFDs work with a high leverage effect, which multiplies the margin and moves large sums in the market with little capital investment.
  • This mechanism makes CFD trading attractive, but it also carries a great deal of risk.

What exactly is a CFD?

CFDs belong to the category of derivative financial instruments and are highly speculative. Every CFD is a contract between two parties: the investor (or trader) and the provider (or market maker). The two parties speculate on the price development of a certain base value and agree to exchange money and base value at the beginning and end of the term of the CFD. Various trading values are used as the base value, for example stocks, commodities or currencies.

Trade CFDs

The investor can open a long position or a short position. In the case of a long position, he expects the price of the corresponding underlying asset to rise and buys it. At the end of the CFD he sells it again and in the best case makes the previously speculated profit. With a short position, the investor expects falling prices and sells the value in order to buy it back later at a lower price, which results in a profit if it is successful. Speculation is based on various indices.

CFDs are short-term speculative transactions that are often completed within a day. Accordingly, the prices are exposed to strong fluctuations, which increases the risk of loss. Anyone who wants to hold a position overnight, i.e. beyond the close of trading, has to make compensatory payments. The amount of these payments results from the respective calculation basis, the holding period and the position size at the close of trading. For short positions, the following applies: if the reference interest rate exceeds the discount, the investor receives a credit.

The transactions take place over-the-counter (OTC). Not every platform allows CFD trading; a broker comparison shows where CFDs can be traded.

Important terms relating to contracts for difference

Investors who want to trade CFDs should know the following terms:

  • Margin: The margin is a security deposit that the trader lodges when opening a position. How large it must be depends on the underlying asset being traded. The margin is tied to the respective position; the investor can invest the untied capital in further positions.
  • Leverage: By providing security, investors leverage their capital many times over. For example, if the margin is 1 percent and you bet 100 euros, you can move a capital of 10,000 euros.
  • Stop Loss: Traders set a stop loss limit at the point where their position should be automatically closed if the price should develop in the opposite direction to expectations.
  • Base value: The base values are the actual trading values. These are, for example, indices, stocks, commodities or currencies.
  • Intraday trading: The investor opens his position at the beginning of the day and closes it again at the end of the day. So he carries out the CFD business within a day. This practice is common for short-term speculations.
  • Overnight position: If a trader holds his position beyond the respective close of trading, it becomes an overnight position. Compensation payments are due for this.
  • Diversification: In order to increase their chances of winning, investors often open several positions on different markets. In this way, they distribute their capital over several underlyings, and a single loss carries less weight.
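The leverage mechanics described above can be sketched in a few lines of Python. This is a hypothetical illustration: the function names are my own, and the 1 percent margin rate with a 100 euro deposit is taken from the leverage example in the list above.

```python
def cfd_exposure(margin_deposit: float, margin_rate: float) -> float:
    """Market exposure that a given margin deposit controls at a margin rate."""
    return margin_deposit / margin_rate

def cfd_leverage(margin_rate: float) -> float:
    """Leverage factor implied by the margin rate (1 percent margin -> 100x)."""
    return 1.0 / margin_rate

# Example from the text: a 1 percent margin and 100 euros deposited
print(cfd_exposure(100.0, 0.01))  # 10000.0 euros moved in the market
print(cfd_leverage(0.01))         # 100.0
```

Note that the same leverage multiplies losses just as it multiplies gains, which is the risk discussed in the following section.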

Risks when trading CFDs

Trading Contracts for Difference is a very risky business. The high leverage can bring high profits on the one hand, but lead to large losses on the other. The lost amount can even exceed the total capital available, which would amount to a total loss. However, the account cannot be overdrawn, and since a BaFin decision of August 10, 2017, private investors are no longer obliged to make additional contributions. That at least limits the otherwise incalculable risk of loss.

What happens in the event of a total loss?

If the loss exceeds the capital available on the CFD account, the bank usually arranges a forced closing; that is, it closes all open positions of the investor. Before the deposited security is used up, the bank warns with a first margin call once 80 percent of it has been consumed. A second margin call follows at the 90 percent mark and at the same time announces the impending forced closing.
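The escalation just described can be sketched as a simple classification. This is a hypothetical illustration; the 80 and 90 percent thresholds come from the text, while the function name and return labels are my own.

```python
def margin_status(deposit: float, unrealized_loss: float) -> str:
    """Classify an account by how much of the security deposit is used up.

    Thresholds follow the text: first margin call at 80 percent,
    second margin call at 90 percent, forced closing once the
    deposit is exhausted.
    """
    used = unrealized_loss / deposit
    if used >= 1.0:
        return "forced close"
    if used >= 0.9:
        return "second margin call"
    if used >= 0.8:
        return "first margin call"
    return "ok"

print(margin_status(1000, 850))  # first margin call
```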

Other risks

Some of the other risks associated with CFD trading include:

  • Overnight risk: Investors cannot react immediately to price changes in positions held overnight.
  • Market price risk: The prices of the underlyings can develop unfavorably.
  • Liquidity risk: In the event of market disruptions and outside trading hours, investors cannot open or close positions.
  • Day trading risk: If a trader makes losses within a day and tries to compensate for them with even riskier new deals, the loss can multiply in the event of failure. The high level of trading activity can also lead to high transaction costs.
  • Bank or market maker insolvency.
  • Organizational and operational risks.

How can the risks be reduced?

The risks involved in trading Contracts for Difference are great and varied. However, through skilled risk management, investors can limit the risk of loss. Before opening new positions, they calculate the profit factor from the total value of the winning trades and the total value of the losing trades, or from the average profit and average loss. If the profit factor is greater than 1, the trader can make money, so his plan is profitable. In addition, he should define an initial risk that he is maximally willing to take and set a stop loss accordingly.

A proven money management model is the “1 percent from the account” model. Investors choose a risk of 1 percent for each position. Another means that reduces the risk of loss and increases the chances of winning is diversification.
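A minimal sketch of the two risk-management tools just described, assuming the profit factor is taken as the ratio of total winning-trade value to total losing-trade value; all trade figures are hypothetical.

```python
def profit_factor(wins: list[float], losses: list[float]) -> float:
    """Gross profit of winning trades divided by gross loss of losing trades.
    A value above 1 means the trading plan has been profitable overall."""
    gross_profit = sum(wins)
    gross_loss = abs(sum(losses))
    return gross_profit / gross_loss

def max_risk_per_trade(account_balance: float, risk_fraction: float = 0.01) -> float:
    """'1 percent from the account' money management: the most a single
    position may lose before its stop loss closes it."""
    return account_balance * risk_fraction

pf = profit_factor([300, 150, 250], [-100, -200])  # 700 / 300, roughly 2.33
print(round(pf, 2))
print(max_risk_per_trade(10_000))  # 100.0 euros of risk per position
```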


Meanings of SWOT Analysis Part II

Meanings of SWOT Analysis Part II

The individual points of the SWOT analysis

The usual way of creating a SWOT analysis begins with the two main parts, the company analysis as an internally-related part and the environmental analysis with external aspects. Both can be represented in a simple four-field matrix so that the individual points can be related to one another. From this connection, final deductions of suitable action strategies are possible.

Company analysis (internal)

As the name suggests, the company analysis relates to the company itself. This is a self-observation to determine the internal factors, divided into strengths and weaknesses. These are to a certain extent the result of organizational processes, but also characteristics such as a good image. It should be noted that these are properties the company has produced itself. For example, a satisfied customer base of a green electricity producer may be a strength, but the generally good image of green electricity is definitely an (external) opportunity.

Environmental analysis (external)

Such external factors are in turn incorporated into the environmental analysis. This relates to the direct corporate environment, i.e. all external circumstances that are related to the company. The two characteristics to be analyzed here are opportunities and threats. It is important that these external conditions and/or changes are exogenously acting forces, which means that the company itself cannot influence them. Here it is important to keep an eye on those factors and to anticipate any changes. Once again, it is important to correctly differentiate between strengths and opportunities or weaknesses and risks. According to abbreviationfinder, SWOT stands for Strength, Weakness, Opportunity, and Threat.


Strengths

The internal strengths of a company, an institution and so on are those things that it is good at, so to speak. In particular, strengths are revealed when comparing with competitors, i.e. when looking at the competition. Examples are aspects such as a favorable location, good product quality, high qualification of the employees or low fixed costs. For the analysis of an individual, aspects such as a fundamental, ambitious optimism, creativity or certain professional qualifications should be mentioned here.


Weaknesses

The weaknesses are naturally the counterpart to the strengths. In this respect, they too can be filtered out particularly with a view to the competitive situation and the competition. Corresponding to the strengths, internal weaknesses are, for example, an unfavorable location, declining quality, too few employees or low financial strength. Certain dependencies can also be a weakness. For an individual, a lack of know-how, low resilience or low mobility would be listed accordingly.


Opportunities

The third point is to discuss the opportunities. The development in the respective market or in the environment is particularly important for a company, as it shows whether there is a change in customer behavior. In addition, trends can arise which influence the position of a company. New technological developments may also offer potential for profitable innovations, product improvements or the optimization of work processes. Examples include an expansion of the infrastructure, the broadband network (also for communication between employees) or positive demographic developments.

Threats (Risks)

Finally, an always important and decisive point are the risks that arise from outside. Such risks can ultimately have a negative impact on sales and, in the worst case, even lead to bankruptcy. It may be necessary to comply with legal changes or to compete with new competitors who may even have settled in the immediate vicinity. In a sense, any opportunity reversed can pose a risk, which means that infrastructural or demographic developments also play a role here. Last but not least, one or the other risk can arise spontaneously, and all the more threateningly, for example the sudden unavailability of a sales partner.


Combining the SWOT factors

After creating the aforementioned matrix with the SWOT factors just listed, the next step is to combine the individual points. The internal and external points are combined in detail. In other words, there are four connections in total: strengths with opportunities, strengths with risks, weaknesses with opportunities, and weaknesses with risks.

From these relationships to one another, suitable measures, actions and strategies can be developed that correspond as closely as possible to the objective of the analysis. To a certain extent, these actions can be summarized as expanding (strengths-opportunities), safeguarding (strengths-risks), catching up (weaknesses-opportunities) and avoiding (weaknesses-risks).
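As a hypothetical sketch, the four combinations can be expressed as simple cross-products of the SWOT fields; the example factors below are invented purely for illustration.

```python
# Hypothetical SWOT factors for a fictitious green electricity producer
swot = {
    "strengths": ["high product quality"],
    "weaknesses": ["low financial strength"],
    "opportunities": ["growing demand for green products"],
    "threats": ["new competitor nearby"],
}

# Pair internal factors with external ones: the four strategy fields
strategies = {
    "SO (expand)":    [(s, o) for s in swot["strengths"]  for o in swot["opportunities"]],
    "ST (safeguard)": [(s, t) for s in swot["strengths"]  for t in swot["threats"]],
    "WO (catch up)":  [(w, o) for w in swot["weaknesses"] for o in swot["opportunities"]],
    "WT (avoid)":     [(w, t) for w in swot["weaknesses"] for t in swot["threats"]],
}

for field, pairs in strategies.items():
    print(field, pairs)
```

Each pair is then a prompt for a concrete measure, e.g. how "high product quality" can be used to exploit "growing demand for green products".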

SO (Strengths & Opportunities)

The combination of strengths and opportunities is about determining which strengths of a company or similar can contribute to realizing opportunities. Which positive factors can be used here in order to benefit as much as possible from the opportunities that arise? In particular, expansion strategies are aimed at researching and realizing export and investment potential.

ST (Strengths & Threats)

The strengths and risks connection on the other hand ideally enables protection against emerging or general (economic, cultural, political etc.) threats. The strategies developed from this are accordingly aligned in such a way that they effectively use internal strengths to cushion, offset or even completely ward off environmental hazards, i.e. ultimately to counter these external circumstances for the benefit of the company.

WO (Weaknesses & Opportunities)

The combination of weaknesses and opportunities is less about the use or benefits of internal factors, but more logically about reducing those negative characteristics. In other words, in relation to favorable circumstances that arise, it is necessary to develop strategies and measures that reduce or eliminate the respective weaknesses as far as possible. When it comes to catching-up strategies, the consideration of whether weaknesses can even become strengths plays a central role.

WT (Weaknesses & Threats)

Ultimately, the most difficult and most drastic combination is that between existing, internal weaknesses and emerging, external risks. Above all, it is imperative to reduce weak points and negative factors in order not to let them develop into risks. Thus, more than any other, security strategies need to be developed selectively and applied accordingly. It is important to analyze exactly in which weak company aspects occurring risks would weigh particularly heavily.


For the elaboration of all the parts of the analysis mentioned and the presentation of the SWOT factors, a graphic representation has proven itself. As mentioned, the simplest yet effective option is a corresponding matrix with four fields for the respective SWOT points. A continuation of this simple representation also includes the strategies, showing optimally how the internal and external aspects can be combined with one another and which strategy results from each combination.

Areas of application of the SWOT analysis

In spite of all the ease of use, the question of practical application naturally arises. When is a SWOT analysis suitable? In fact, there are numerous areas in which such data collection and analysis makes sense.

First and foremost, it offers a simple way of determining the position of an entire institution or of individual products, processes or other objects. With the help of the SWOT analysis, it is possible to assess competitiveness and to evaluate the effectiveness, efficiency and profitability of business areas, product lines and so on.

In connection with this, the analysis also functions as part of the evaluation of customer opinions and public perception, either in relation to individual objects or to an entire company. The SWOT analysis is therefore particularly useful in marketing and should ideally be part of an overall concept. Finally, the advantage of a well-developed SWOT matrix is that it can be used flexibly.

In addition, it provides a reliable method for developing a business plan, a basic business strategy or the (re) alignment of a company. In this respect, a SWOT analysis is also suitable for start-ups and self-employed entrepreneurs. Furthermore, the usefulness of such an analysis should be mentioned again when it comes to individual career planning.

How is a SWOT analysis created?

The preparatory work for the actual SWOT analysis consists first of all in setting the basic objective. In fact, an effective analysis only succeeds if the respective data is related to a clearly defined goal. A mere increase in sales, for example, is a rather vague goal, while an increase in sales for a certain product allows more depth in the analysis.

In order to then create a SWOT analysis, a whole team should work together at best. This ensures that all factors are perceived from different perspectives and thus determined as correctly as possible. It is useful to work with questions and to target the individual SWOT factors. For example:

  • What can the company do better than a competitor / the competition? (Strengths)
  • Where does the company’s good image come from? (Strengths or opportunities)
  • Which areas attract attention again and again due to problems? (Weaknesses)
  • Why do other companies get the contract? (Weaknesses or risks)
  • Which trends are affecting the company now or in the future? (Opportunities or risks)
  • Which cultural / political developments are foreseeable? (Opportunities or risks)

Evaluation of the SWOT analysis

Following the complete analysis of all factors, the evaluation takes place by combining the matching points, as mentioned. This evaluation finally results in the strategies that support the objectives set in advance. Once those strategies have been developed and implemented, there is finally a need for control. The corresponding key figures are then used, as already described.


Meanings of EBIT

Meanings of EBIT

EBIT stands for earnings before interest and taxes. This means that the operating result is presented independently of the amount of taxes and the forms of financing used, so that a company can evaluate itself in an international comparison.


“Earnings before interest and taxes” , or EBIT for short via abbreviationfinder, is also referred to as the operating result of a company. This is a key figure from business administration , with the help of which the profit that a company has achieved in a certain period of time can be read.

The amount of taxes and the interest charged on loans vary from country to country. Therefore, after deducting these costs, the result of the respective company is no longer suitable for an international comparison. The operating result, however, is not influenced by such factors. It is therefore useful for evaluating a company, for example.

EBIT calculation

EBIT is calculated using either the total or cost of sales method. Both methods serve the profit and loss account of a company and offer the possibility of determining the EBIT as a subtotal.

In the total cost method, you compare the sales in a period with the total expenses incurred in the same period. In the cost of sales method, however, the sales of a period are compared with the direct production costs for these sales.

When comparing the EBIT of two or more companies, it is not just a matter of looking at the key figure itself. In addition, you should make sure that it was determined using the same method in all cases. Roughly speaking, the total cost method is a variant that is more commonly used in the German-speaking area. The cost of sales method, on the other hand, is more likely to be used in the Anglo-Saxon region and for companies that are listed on the stock exchange or are internationally active.

A simple formula for calculating EBIT:

Net income
+ tax expense
– tax income
+ interest expense or other financial expenses
– interest income or other financial income
= EBIT (earnings before interest and taxes)
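The formula above can be expressed directly in Python; the figures in the example are hypothetical and serve only to show the add-back logic.

```python
def ebit(net_income: float, tax_expense: float, tax_income: float,
         interest_expense: float, interest_income: float) -> float:
    """Earnings before interest and taxes, following the simple formula above:
    start from net income, add back taxes and financial expenses,
    and remove tax and financial income."""
    return (net_income
            + tax_expense - tax_income
            + interest_expense - interest_income)

# Hypothetical figures (in thousands of euros)
print(ebit(net_income=500, tax_expense=200, tax_income=10,
           interest_expense=80, interest_income=20))  # 750
```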

What does the EBIT figure say?

EBIT is a key figure for the operating result . All expenses that you cannot assign to the actual activity of the company are filtered out for the calculation. Extraordinary expenses and income are also not taken into account.

Interest and taxes are ignored in the calculation, as they do not relate directly to the result of the operating business. This is how the company’s operating result is presented.

EBIT margin

You don’t take interest and taxes into account when calculating EBIT. Therefore, it is also possible to compare companies from several countries in this way. The results from the balance sheet are often distorted by differing tax or interest rates, but this is not a problem with EBIT.

From EBIT, the so-called EBIT margin can be derived, which you can calculate as follows:

100 * EBIT / sales = EBIT margin in percent

It indicates how high the operating result was in relation to the company’s annual turnover . In short, a higher value means that a company is operating particularly economically. However, you should note that the value can differ significantly from industry to industry. Comparisons across different industries should therefore not be based on this margin.

The general rule of thumb is that a company with a margin of less than three percent is considered not very profitable or even vulnerable to crises. On the other hand, there is high profitability if the value is more than 15 percent.
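A short sketch combining the margin formula with the rule of thumb just stated; the function names and the sample figures are my own, while the 3 and 15 percent thresholds come from the text.

```python
def ebit_margin(ebit: float, sales: float) -> float:
    """EBIT margin in percent: 100 * EBIT / sales."""
    return 100.0 * ebit / sales

def profitability_rating(margin_percent: float) -> str:
    """Rule of thumb from the text: below 3 percent is considered weak,
    above 15 percent highly profitable."""
    if margin_percent < 3.0:
        return "not very profitable"
    if margin_percent > 15.0:
        return "highly profitable"
    return "average"

m = ebit_margin(750, 5_000)        # hypothetical: EBIT 750 on sales of 5,000
print(m, profitability_rating(m))  # 15.0 average
```

As the text notes, such a rating should only ever be compared within one industry.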


Meanings of ROI

Meanings of ROI

The ROI is a business key figure for the return of a company. It is measured by the percentage ratio of capital employed and profit achieved and can be used to assess an investment, the performance of a branch of business or the entire company.

Short for ROI by abbreviationfinder, return on investment is a key figure calculated from the return on sales and capital turnover, from which the relationship between the capital invested and the profit achieved results. It can also be viewed as the profit target of a company or an individual business area, since it expresses the amount of the expected return flow of capital from an investment.

The ROI thus serves as a benchmark for the performance of companies or certain areas of the company in a certain period of time. Its development goes back to the American engineer Donaldson Brown, who defined it in 1919 while working for the chemical company Du Pont de Nemours. It is considered the key figure of the so-called Du Pont scheme. This oldest and best-known key figure system, with which balance sheet analyses can be created from several such figures and entrepreneurial decisions can be controlled, was also developed by the eponymous group. In the Du Pont scheme, the ROI is defined as the product of return on sales and capital turnover.

The calculation of the ROI makes sense if it can be assumed that an investment will pay for itself within its expected useful life. In the IT sector, this useful life is set very low at around three years; it can be significantly longer for buildings and production facilities.

Calculate return on investment – ROI formula

To calculate the ROI, the total capital of a company, which is made up of equity and debt, is always used. However, according to a modernized variant of the ROI, individual investments can also be calculated. The prerequisite for this is that the returns from the individual investment are already known.

The ROI formula explained: the key figures behind the calculation

Return on sales

The return on sales (also: Return on Sales (ROS) or net return on sales) is understood to be the relationship between profit and sales within an accounting period. It can be used to read off a company's percentage profit in relation to a certain turnover. For example, if the return on sales is 10%, ten cents of profit were made for every euro of sales. If the return on sales increases, this is an indication of increased productivity; a falling ROS, on the other hand, means lower productivity and thus increasing costs. The ROS can be calculated using the formula:

Return on sales = (profit / net sales) x 100

Capital turnover:

The capital turnover indicates the ratio of equity or total capital employed to sales. The reference values ​​for calculating the capital turnover are the average total capital and the sales revenue within a business period. By determining the capital turnover, it is possible to determine how much turnover was achieved with a certain amount of capital in a certain period. The formula for calculating the capital turnover is:

Capital turnover = net sales / total capital (or invested capital)

An example of calculating the ROI

A company is investing 50,000 euros in advertising in order to acquire new customers. For this measure, the company achieved a turnover of 60,000 euros.

The capital employed is therefore 50,000 euros, the generated sales 60,000 euros and the profit thus 10,000 euros (sales – costs = profit). Now just insert into the formula:

Return on Investment (ROI)

= Return on sales x capital turnover

= (Profit / sales) x (sales / invested capital)

= (10,000 / 60,000) x (60,000 / 50,000)

Return on Investment (ROI) = 0.2 = 20%

The calculated return on investment is therefore 0.2, or 20%, which means that every euro invested generated € 0.20 in profit (and € 1.20 in sales).
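The worked example can be reproduced in a few lines, following the Du Pont decomposition given above (return on sales times capital turnover).

```python
def return_on_investment(profit: float, sales: float,
                         invested_capital: float) -> float:
    """ROI as return on sales times capital turnover (Du Pont scheme)."""
    return_on_sales = profit / sales
    capital_turnover = sales / invested_capital
    return return_on_sales * capital_turnover

# Example from the text: 50,000 euros invested, 60,000 euros in sales,
# hence 10,000 euros profit
roi = return_on_investment(profit=10_000, sales=60_000, invested_capital=50_000)
print(f"{roi:.0%}")  # 20%
```

Note that the sales term cancels, so the ROI equals profit divided by invested capital; the decomposition is still useful for diagnosing *why* the ROI changed.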

Analysis and interpretation of the ROI

For a correct analysis of the ROI, it is important to note that only monetary and internal company factors are recorded by it. Influences that have to do with the market situation, image values, customer satisfaction, risks and competitors as well as time factors are not included. For this reason, the return on investment should never be used as the only analysis tool for evaluating company performance.

As can be seen from the formula, the ROI is composed of two important company-specific key figures and can therefore:

  • have the same result in different combinations,
  • be increased if the return on sales (profit : sales) falls but the capital turnover (sales : capital) rises.

Return on sales and capital turnover should therefore also be analyzed independently and examined for changes and their causes.

The results of the ROI calculation in percent can be compared with profit targets: a return on investment of 7.5 means that a certain amount of capital has brought in a return of 7.5%. Such a value can be quite satisfactory for traditional companies, whereas growth sectors will aim for values between 15 and 25%.

What are the advantages of calculating the return on investment?

Even if the informative value of the ROI is limited by the mentioned limiting factors, there are some undeniable advantages. The ROI provides important data for:

  • the analysis and comparison of individual company areas and investment objects,
  • the determination and consideration of the overall performance of a company for a past period,
  • the planning and control of future investments

The importance of ROI in marketing

In marketing, the return on investment is particularly important in order to be able to plan in advance and finally evaluate the efficiency of an advertising campaign from a financial point of view. For this purpose, two units of measurement are used that are directly related to the ROI.

ROMI (Return on Marketing Investment, also: RoMI):

The ROMI measures – just like the ROI – the relationship between capital employed and profit achieved. However, it only relates to the marketing sub-area. For the calculation, the entire effort for a marketing measure, such as product costs, marketplace costs and pricing, is included. The formula for calculating the ROMI is:

ROMI = (net sales – product costs – advertising costs) / advertising costs

ROAS (Return on Advertising Spend):

The ROAS represents the profitability of an advertising measure. Among other things, it is used to implement measures that increase the quality of the measure and / or reduce the costs if the results of a campaign are negative.
The ROAS is calculated using the formula:

ROAS = (net profit / advertising costs) x 100
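Both campaign metrics can be sketched directly from the two formulas above; all campaign figures in the example are hypothetical.

```python
def romi(net_sales: float, product_costs: float, advertising_costs: float) -> float:
    """ROMI = (net sales - product costs - advertising costs) / advertising costs."""
    return (net_sales - product_costs - advertising_costs) / advertising_costs

def roas(net_profit: float, advertising_costs: float) -> float:
    """ROAS in percent = (net profit / advertising costs) * 100."""
    return net_profit / advertising_costs * 100.0

# Hypothetical campaign figures in euros
print(romi(net_sales=60_000, product_costs=30_000, advertising_costs=10_000))  # 2.0
print(roas(net_profit=20_000, advertising_costs=10_000))                       # 200.0
```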


Meanings of SME

Meanings of SME

In the decades after the Second World War, Germany became the world export champion. Today the country is one of the undisputed industrial nations. Germany owes this not least to its SMEs, because they represent the strong backbone of the German economy. You can find out what you need to know about SMEs in the following article.

SMEs – European Commission definition

SMEs are defined more precisely by the European Commission in EU Recommendation 2003/361. The basis for this are defined orders of magnitude. According to Abbreviationfinder, SME stands for small and medium-sized companies, which are classified by the number of their employees and by their turnover or balance sheet total.

Good to know:

In the EU, SMEs are also referred to by the English abbreviation SME, which stands for small and medium-sized enterprises. This definition is important because it is crucial for access to finance, but also for access to EU support programs, which are specifically geared towards SMEs and their definition.

Definition of micro-business

A micro-enterprise is often also called a very small business. It is a company that employs fewer than 10 people and

  • generates a turnover of less than two million euros per year or
  • has a balance sheet total of less than two million euros.

In most cases, the owner of a very small business pursues the goal of feeding or providing for themselves and their families with this business.

Definition of small businesses

According to the definition of the European Commission, small companies have fewer than 50 employees and

  • have a turnover of less than 10 million euros or
  • a balance sheet total of less than 10 million euros.

Definition of medium-sized companies

A medium-sized company is a company that has fewer than 250 employees and

  • a turnover of at most 50 million euros or
  • a balance sheet total of at most 43 million euros.
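A hypothetical sketch of the size classification, combining the three definitions above; it assumes that meeting either the turnover or the balance sheet criterion is sufficient, as the "or" in the definitions suggests.

```python
def classify_sme(employees: int, turnover_m: float, balance_total_m: float) -> str:
    """Classify a company per EU Recommendation 2003/361 as sketched above.
    Turnover and balance sheet total are given in millions of euros;
    meeting either financial threshold is treated as sufficient."""
    if employees < 10 and (turnover_m < 2 or balance_total_m < 2):
        return "micro"
    if employees < 50 and (turnover_m < 10 or balance_total_m < 10):
        return "small"
    if employees < 250 and (turnover_m <= 50 or balance_total_m <= 43):
        return "medium"
    return "large"

print(classify_sme(9, 1.5, 1.8))      # micro
print(classify_sme(40, 8.0, 12.0))    # small
print(classify_sme(200, 45.0, 40.0))  # medium
```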

Criteria for classifying SMEs

Micro, small and medium-sized businesses

In the EU, threshold values or criteria according to which an SME is classified have applied since 01.01.2005. This classification already follows from the definitions above. At a glance, the criteria are as follows:

  • Micro-enterprise: fewer than 10 employees and a turnover or balance sheet total below 2 million euros
  • Small company: fewer than 50 employees and a turnover or balance sheet total below 10 million euros
  • Medium-sized company: fewer than 250 employees and a turnover of at most 50 million euros or a balance sheet total of at most 43 million euros

These thresholds apply to all independent companies. If the company operates as part of a group, the number of employees and the turnover or balance sheet total of the entire group must be taken into account.

How do I calculate my number of employees?

The size of a company (up to 10 employees, 11 to 50 employees, and more than 50 employees) always refers to full-time employees in accordance with DGUV Regulation 2. This means that when calculating the number of employees, you have to convert all of your employees, including part-time employees, into full-time equivalents.


To do this, proceed as follows: all employees who work no more than 20 hours per week are counted with a factor of 0.5. For anyone who works up to a maximum of 30 hours per week, you have to use a factor of 0.75 as the basis for the calculation.


Suppose you are an entrepreneur with 13 employees, made up as follows:

Type of employee                             Calculation
4 full-time employees                        4 × factor 1 = 4.0
7 part-time employees (up to 20 h/week)      7 × factor 0.5 = 3.5
2 part-time employees (up to 30 h/week)      2 × factor 0.75 = 1.5
Number of employees converted to full-time   9.0

That means you have 9 employees (full-time equivalents) in your company, and you therefore fall into the category of small businesses.
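The conversion shown in the table can be sketched as a short helper function. The factors (0.5 up to 20 hours per week, 0.75 up to 30 hours per week) are taken from the section above; everything else is illustrative:

```python
def full_time_equivalents(weekly_hours):
    """Convert a list of weekly working hours into full-time equivalents
    using the factors described above."""
    def factor(hours):
        if hours <= 20:
            return 0.5   # part-time up to 20 h/week
        if hours <= 30:
            return 0.75  # part-time up to 30 h/week
        return 1.0       # full-time
    return sum(factor(h) for h in weekly_hours)

# The example from the table: 4 full-time (40 h), 7 part-time (20 h),
# 2 part-time (30 h)
staff = [40] * 4 + [20] * 7 + [30] * 2
print(full_time_equivalents(staff))  # 9.0
```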

Definition of annual sales

The annual turnover is the total turnover from all goods and services sold in a year. Annual turnover, also called revenue, is calculated as the sales volume multiplied by the sales price.

Definition of annual balance sheet total

The annual balance sheet total is understood to mean the total of fixed and current assets on the assets side of a company and the total of equity and debt capital on the liabilities side at the end of a financial year.

Definition of company types

In addition to the size classification of SMEs, the different types of companies are also precisely defined.

Independent company

Companies that hold no more than 25 percent of the capital or voting rights in another company, and in which no other company holds more than 25 percent, are referred to as independent companies. There are, however, exceptions in which this threshold of 25 percent may be reached or exceeded:

  • State venture capital companies, universities or non-profit research institutes, and venture capital companies
  • Institutional investors (including regional development funds)
  • Local authorities that are autonomous and that have an annual budget of less than 10 million euros and fewer than 5,000 inhabitants


Affiliated company

Affiliated companies are companies that meet at least one of the following requirements:

  • The company is required to prepare consolidated annual financial statements .
  • The company holds the majority of shareholders' or members' voting rights in another company.
  • The majority of the members of the administrative, management or supervisory body of another company can be appointed or dismissed by the company.
  • The company is entitled to exercise control over another affiliated company through a concluded contract or through clauses in the articles of association .

These points also apply in reverse, i.e. when another company holds such rights in the company concerned.

Partner company

A partner company is a company which, alone or together with one or more other companies, holds from 25 percent up to a maximum of 50 percent of the capital or voting rights in another company. The same limits also apply to shares held in it by other companies.

KfW templates and calculation sheets for SMEs

A KfW calculation scheme forms the basis for calculating the threshold values. This scheme is available to every SME. You can find out what needs to be considered for the individual types of company mentioned above in the KfW information sheet on the definition of SMEs.


Advantages and problems of SMEs

As with large companies, there are also advantages and disadvantages in the area of SMEs, which we will discuss below:

Benefits of being an SME

  • An SME has flat hierarchies and structures in its organization. This results in short decision-making paths and creates a high degree of flexibility and stability.
  • There is great proximity to the stakeholders. Customer contact is very close, and cooperation within the company is personal.
  • Due to the proximity to the owner family, SMEs are in most cases oriented towards the long term.
  • Continuous innovations are created by selected and motivated employees.

Even if SMEs are also seen as job engines and great training companies, there are also challenges here:

Difficulties that SMEs face

  • The resources in the areas of finance and human resources are very scarce . It is not uncommon for dual functions to occur.
  • Strategic knowledge and methods are very important for corporate success. However, SMEs very often lack the necessary knowledge here.

Examples of companies from SMEs

Each example is checked against the criteria above (number of employees, annual turnover or balance sheet total, and company type):

  • Painting company; 17 employees; turnover per year 45,000 euros; independent company → SME: yes (small business)
  • Auto repair shop; 10 employees (including 3 part-time with 20 hours a week); turnover per year 39,000 euros; independent company → SME: yes
  • Office furniture manufacturer; 310 employees; annual balance sheet total of 58 million euros; partner company with a stake of more than 30% in a supplier company → not an SME
Meaning of Stock Exchange

Meaning of Stock Exchange

Stock exchanges look back on a long tradition. The oldest stock exchange was founded in 1409 in Bruges, Belgium. The first German stock exchanges are the Augsburg Stock Exchange and the Nuremberg Stock Exchange, founded in 1540. The Frankfurt Stock Exchange, today the most important stock exchange in Germany and also one of the most important stock exchanges worldwide, was founded in 1585. Exchanges are not only used for trading in securities, but also for organized trading in almost all products. In addition to stock exchanges, there are also special exchanges for raw materials, foreign exchange and in London even a wine exchange, the Liv-ex. For example, CO2 certificates are traded on the European Energy Exchange (EEX) based in Leipzig.

  • Exchanges serve to trade a commodity – be it a commodity, stocks or bonds – in a regulated market environment.
  • The aim of exchange trading is not only the transaction itself, but also the creation of transparency with regard to pricing.
  • Trading on the stock exchanges in Germany is controlled by the Bafin.

How the stock exchange works

Exchanges serve to trade a commodity – be it a commodity, stocks or bonds – in a regulated market environment. The value results from supply and demand. The emphasis here is on the term “regulated market environment”, as the exchange, in contrast to a classic marketplace, follows certain rules for trading. In order for a share to be traded on the stock exchange, the company must meet certain requirements that are regulated in the German Stock Exchange Act (BörsG). According to abbreviationfinder, SX stands for Stock Exchange.

The aim of exchange trading is not only the transaction itself, but also the creation of transparency with regard to pricing. Floor trading, the physical presence of stock brokers in the trading room, was completely replaced in Frankfurt by XETRA trading, computer trading, in 2011.

Stock exchanges in Germany

Trading on the stock exchanges in Germany is controlled by the Bafin. In addition to the Frankfurt Stock Exchange, there are also several regional stock exchanges in Germany. After the closure of the Bremen Stock Exchange, these are still the following marketplaces:

  • Berlin Stock Exchange
  • Düsseldorf Stock Exchange
  • Joint Börsen AG Hamburg-Hanover (including the Hanseatic Stock Exchange as part of BÖAG Börsen AG)
  • Munich Stock Exchange
  • European Energy Exchange, Leipzig
  • Stuttgart Stock Exchange: The Stuttgart Stock Exchange offers investors the opportunity to trade shares in open-ended investment funds. Under certain circumstances, this can be cheaper than purchasing with a front-end load.
  • Tradegate Exchange, Berlin: Tradegate Exchange was the first exchange for fully electronic off-exchange securities trading. Over-the-counter trading enables buyers and sellers to come into direct contact with one another.

The stock market indices

When investors think of the term stock exchange, they primarily think of trading in stocks. The most important indices are therefore the stock indices. In Germany these are the DAX 30, the M-DAX and the S-DAX. The DAX 30 is made up of the 30 largest German companies, the M-DAX of the next 50 largest companies in the ranking. The S-DAX, in turn, lists the 50 companies that follow those in the M-DAX, so-called small caps or smaller public limited companies. The TecDAX comprises 30 of the 50 largest technology companies.

Indices generally reflect investors’ expectations of a country’s future economic performance. In the US, the best-known indices are the S&P 500 and the Dow Jones Industrial. On the Paris stock exchange, the CAC 40 reflects the performance of the 40 largest companies in France; in Great Britain the FTSE 100 performs this task. The Euro Stoxx 50 indexes the performance of the 50 largest companies within the euro zone.

The trade

Exchange trading cannot be carried out by investors themselves; it requires the involvement of an official stockbroker. The purchase order for a security is forwarded via the custodian bank and processed in the now fully electronic trading system XETRA. The broker, in turn, virtually brings buyers and sellers of a share together, provided both quote the same price. In addition to the brokerage fee for the bank, there is also a commission for the broker when trading securities.
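The matching principle described above (buyers and sellers are brought together when their prices meet, and the price results from supply and demand) can be illustrated with a toy model. This is a deliberately simplified sketch, not a description of how XETRA actually matches orders:

```python
def match_orders(bids, asks):
    """Toy order matching: a trade occurs whenever the highest bid is at
    least as high as the lowest ask. For simplicity each order is for one
    share and trades execute at the ask price (an assumption of this sketch)."""
    trades = []
    bids = sorted(bids, reverse=True)  # best (highest) bid first
    asks = sorted(asks)                # best (lowest) ask first
    while bids and asks and bids[0] >= asks[0]:
        trades.append(asks[0])         # execute at the ask price
        bids.pop(0)
        asks.pop(0)
    return trades

# Bids (buy limits) and asks (sell limits) in euros:
print(match_orders([101, 100, 98], [99, 100, 103]))  # [99, 100]
```

Once the best remaining bid falls below the best remaining ask, no further trades take place; the unmatched orders would stay in the order book.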

The five most important stock market events

  • On October 28, 1929, the New York Stock Exchange collapsed: the Great Depression had hit the stock exchanges. Contrary to what is often reported today, it was not “Black Friday”: on October 25 of that year, the Dow Jones Industrial index actually rose again.
  • In 1997, the bubble of artificially high exchange rates in Asia’s emerging markets burst. The devaluation of the baht in Thailand was the trigger. Prices on the Hong Kong stock exchange fell by 40 percent.
  • Technology stocks had soared until March 2000 in a rally that ended in a bubble. When the dotcom bubble burst, investors lost over 200 billion euros.
  • Due to the financial crisis, prices slipped massively in 2008. Individual stocks were repeatedly suspended from trading, and the Moscow Stock Exchange kept closing its doors for a few days.
  • In the spring of 2014, the DAX 30 reached its all-time high of over 10,000 points for the first time – proof that the stock market keeps going up in the end.


Meaning of Balance Sheet

Meaning of Balance Sheet

A company’s balance sheet provides information about the origin and use of a company’s capital. The assets, the assets, and the capital, the liabilities, are compared. A profit and loss account is the basis for creating a balance sheet. The balance sheet is a company analysis set for a specific date, while the profit and loss account documents the business success for a specific period. Double-entry bookkeeping is a prerequisite for preparing a balance sheet.

  • The balance sheet provides a legally binding overview of the company’s assets and the business activities carried out in the past financial year.
  • The preparation of a balance sheet is based on the requirements of the German Commercial Code (HGB).
  • The balance sheet is broken down into assets and liabilities.
  • At the end of the day, the balance sheet must be balanced, i.e. on each side, assets and liabilities, the bottom line is the same number.

The tasks of the balance sheet

Short for BS by abbreviationfinder, a balance sheet fulfills different functions. On the one hand, it is used for documentation. It provides a legally binding overview of the company’s assets and the business activities carried out in the past financial year. The equity account provides information about the profit or loss of a company and thus offers options for comparison with previous accounting periods. The detailed list of profits and losses is based on the profit and loss account drawn up beforehand. Last but not least, a balance sheet also contains an information function.

Accounting requirements according to the Commercial Code

A balance sheet cannot be drawn up at the company’s own discretion; it is based on the requirements of the German Commercial Code (HGB). The HGB also regulates which companies are required to keep accounts. This generally includes all corporations as well as sole proprietorships and tradespeople with an annual turnover of more than 500,000 euros or a profit of 50,000 euros. If these figures are exceeded for the first time, the tax office will request a balance sheet. Until such a request is made, the conventional profit and loss account can be continued. Freelancers are generally exempt from accounting.

The structure of the balance sheet

Section 266 of the German Commercial Code clearly stipulates how a balance sheet must be structured. The breakdown is made into assets (use of funds) and liabilities (source of funds).

Assets side:
  • Fixed assets, divided into intangible assets, property, plant and equipment and financial assets
  • Current assets, divided into inventories, receivables and securities
  • Prepaid expenses (existing but not yet due receivables)
  • Deferred tax assets (future tax benefits)
  • Difference from asset offsetting
  • Deficit not backed by equity

Liabilities side:
  • Equity, divided into subscribed capital, capital reserve, retained earnings, profit or loss carried forward, net income or net loss for the year and net loss for the year not covered by equity
  • Provisions, divided into provisions for pensions, taxes and other provisions
  • Liabilities
  • Prepaid expenses (existing but not yet due liabilities)
  • Deferred tax liabilities (future tax charges)

At the end of the day, the balance sheet must be balanced, i.e. the bottom line on each side, assets and liabilities, is the same number. The shortfall not covered by equity arises when the losses in the reporting period are so high that they exceed equity. Bank balance sheets deviate from the presentation of a normal commercial balance sheet and are almost mirror-inverted in the division of the asset and liability positions. A company’s financial assets on the assets side represent liabilities on the liabilities side of a bank. The internationalization of companies has led to an international approach to the structure of the balance sheet according to the International Financial Reporting Standards (IFRS), which deviates from the national requirements for preparing a balance sheet.
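The balancing requirement can be expressed as a simple check. The account names and figures below are invented purely for illustration:

```python
def balance_check(assets, liabilities):
    """Check the fundamental balance-sheet identity: the total of the
    assets side must equal the total of the liabilities side."""
    return sum(assets.values()) == sum(liabilities.values())

# Illustrative figures only, in euros (not taken from the text):
assets = {"fixed assets": 120_000, "current assets": 80_000}
liabilities = {"equity": 70_000, "provisions": 30_000, "liabilities": 100_000}
print(balance_check(assets, liabilities))  # True
```

If the two totals differ, a booking error has occurred somewhere, since double-entry bookkeeping records every transaction on both sides.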

Basis of accounting

A balance sheet is based on the “principles of proper bookkeeping”, which in turn are regulated in the HGB. In the end, a balance sheet follows two principles. On the one hand, it is based on the “balance sheet truth” and on the other hand on the “balance sheet clarity”. Truthfulness of the balance sheet means that all business transactions are properly recorded in accordance with the accounting guidelines. The principle of balance sheet clarity states that business transactions and the resulting documentation must be booked in such a way that they are clearly visible and traceable. All facts that are known between two reference dates must be taken into account in the balance sheet.

Delayed business transactions

If a company prepares its balance sheet as of 31 December of a year, it may well happen that the receipt of a service and its payment do not take place in the same year. A classic example is the telephone bill: calls made in December are not billed until January. Against this background, balance sheets are rarely drawn up in January, but only two to three months after the balance sheet date. Services that have already been performed but not yet paid for must also be assessed. Nevertheless, companies have to submit their balance sheets promptly. This means that balance sheets are not always one hundred percent clear-cut.


Meaning of Asset Management

Meaning of Asset Management

The aim of asset management (short for AM by abbreviationfinder) is to further increase existing assets – and to do so as efficiently and with as little risk as possible. The English word “asset” means “fixed assets”. Asset managers look after private as well as corporate assets and try to increase them through various forms of investment. As a rule, investments are made in funds. This requires extensive know-how and skillful risk management. Traditionally, the financial services of asset management are primarily used by investors with larger capital assets (from 50,000 euros). Small investors, on the other hand, have little chance of finding a private asset manager.

  • The decisive advantage of serious asset management lies in the know-how advantage of asset managers over private individuals and companies.
  • While pure investment or investment advisors only take on an advisory role, asset managers usually also make investment decisions.
  • As part of asset management, the assets of private investors and business customers are usually invested in funds.

What does asset management bring?

The decisive advantage of serious asset management lies in the know-how advantage of asset managers over private individuals and companies. Thanks to their market knowledge, asset managers can make long-term, profitable investment decisions even in rapidly changing markets. They have a good overview and – in contrast to private investors, who often tend to make impulsive decisions – are able to make long-term prudent decisions.

These tasks are carried out by asset managers

While pure investment or investment advisors only take on an advisory role, asset managers usually also make investment decisions. They relieve companies and private investors from a whole range of tasks. The classic range of tasks in asset management includes the following points in particular:

  • Information and advice: Asset managers initially have an advisory role. They inform their customers about lucrative investment opportunities.
  • Individual strategy development: Asset managers work out an individual investment strategy together with clients. This is essential because every customer has their own needs. In particular, the investor’s willingness to take risks and personal goals determine the investment strategy.
  • Market observation: Market observation or market analysis is a central component of asset management. The respective managers not only have to have in-depth economic knowledge, but also know current market developments and identify trends as early as possible.
  • Review of investment options: Asset managers review investment products either individually or using statistical methods. In this way, they determine the investment options that best suit the wishes and needs of customers – be it stocks, real estate or life insurance.
  • Diversification and risk management: The aim of asset management is to generate the highest possible return on the assets invested at a comparatively low risk. Risk management is therefore one of the fundamental tasks of asset management. There are a number of methods available to asset managers to limit the risk of an investment. In addition to assessing the security and profitability of certain investment products on the basis of one’s own know-how, diversification plays a decisive role in risk management. Diversifying means: the assets to be invested are distributed across several promising forms of investment. This makes the investment strategy less susceptible to crises and leaves room to counteract possible wrong decisions.
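The risk-reducing effect of diversification can be made concrete with a simplified calculation. The sketch below assumes uncorrelated assets, which overstates the benefit in practice, but it shows the principle: spreading capital across several investments lowers the overall volatility:

```python
from math import sqrt

def portfolio_volatility(weights, vols):
    """Volatility of a portfolio of assets assumed to be uncorrelated:
    sigma_p = sqrt(sum((w_i * sigma_i)^2)).
    Zero correlation is an illustrative assumption, not a market fact."""
    assert abs(sum(weights) - 1.0) < 1e-9  # weights must sum to 100 %
    return sqrt(sum((w * s) ** 2 for w, s in zip(weights, vols)))

single = portfolio_volatility([1.0], [0.20])            # all capital in one asset
spread = portfolio_volatility([0.25] * 4, [0.20] * 4)   # spread over four similar assets
print(round(single, 4))  # 0.2
print(round(spread, 4))  # 0.1
```

Under these assumptions, spreading the same capital over four equally volatile, independent investments halves the portfolio volatility, which is exactly the "less susceptible to crises" effect described above.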

The role of funds in asset management

As part of asset management, the assets of private investors and business customers are usually invested in funds. A fund is nothing more than a collection of different stocks, real estate or bonds that are financed with the capital of various investors.

Anyone who participates in a fund with equity becomes a co-owner and may receive a dividend distribution. If you invest in a package of shares – in contrast to buying shares in a single company – the risk is automatically diversified. A fund always consists of a bundle of securities. If the securities of some companies do not perform as expected, this can be offset by the positive development of others. However, one hundred percent investment security cannot be guaranteed even with funds.

In principle, a distinction is made between mutual funds and special funds. While special funds are aimed at companies, banks and the like, public funds can be used by all investors.

Serious asset management: independence and transparency

Serious asset management is characterized by independence. Ideally, an asset manager is not tied to specific products, but rather chooses investments based solely on the client’s needs. Commissions or other remuneration should also not be paid when brokering certain funds. Bank asset managers should at least have access to the entire product range of the respective institution.

Regardless of whether the services of an independent asset manager or the asset manager of a bank are used: Asset management should be characterized by transparency. This means that potential returns and risks are discussed openly. In addition, any costs and administration fees should be conveyed transparently.


Meaning of ETF

Meaning of ETF

According to abbreviationfinder, ETF is the abbreviation for “Exchange Traded Fund”. Accordingly, ETFs are not traded through the investment companies that set up the respective funds, but on the stock exchange. The price of ETFs is based solely on supply and demand, i.e. purchases and sales on the stock exchange. The fund company’s issuing surcharge does not apply. The support is not provided by fund managers, but by market makers. Their task is to determine the buying and selling prices, which are published continuously during the trading day. Therefore, the development of the ETFs is very transparent.

  • The majority of exchange-traded funds are offered as index funds, which is why both terms are often used synonymously.
  • In most cases, ETFs are based on stock indices due to their large number.
  • One of the great advantages of ETFs over other fund investments is the flexibility of stock exchange trading.
  • The risk of an exchange traded fund depends on the type of fund and the underlying indices.

Participation in index developments

The majority of exchange-traded funds are offered as index funds, which is why both terms are often used synonymously. Index funds are passively managed, which means that they track the development of a specific underlying index. Actively managed ETFs that invest the fund capital in constantly changing, successful values in order to optimize returns and beat indices are extremely rare.

In most cases, ETFs are based on stock indices, of which there are a particularly large number. The best-known share index in Germany that a listed fund can track is the German share index DAX. This index comprises the securities of the 30 stock corporations with the highest turnover in Germany. But it is also possible to replicate real estate, money market or other indices.

Indices shown

In principle, an ETF can track any index. Depending on the type of index, a distinction is made, for example, between:

  • Market-wide indices with stocks from all industries and regions (e.g. the MSCI All Country World Index, which takes industrialized and developing countries and small and large companies into account)
  • Region indices with values from individual economic regions (for example the MSCI Europe Index, which only includes European stocks, or the MSCI Emerging Markets Index, which only includes stocks from developing countries)
  • Sector indices that only show the development in a certain branch of the economy (for example the Euro Stoxx Telecommunications Index)
  • Strategy indices that focus on specific industries, growth rates, or time-based data

Advantages of ETFs

One of the great advantages of ETFs over other fund investments is the flexibility of stock exchange trading. The listed funds can be quickly liquidated on the stock exchange, so that investors have access to the sales proceeds after just a few days.

Save fees

Since passively managed ETFs follow changes in the indices, active management that constantly monitors the market and reacts to market changes with new investment strategies is not required. Fund managers don’t have to be paid. The costs are limited to the exchange fee and the total expense ratio charged to the fund and are significantly lower than when purchasing actively managed fund units. This means that passively managed ETFs are also of interest to private investors.
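The effect of this cost difference can be illustrated with compound interest. The return and fee figures below are assumptions chosen purely for illustration, not statements about actual products:

```python
def final_value(initial, annual_return, ter, years):
    """Value of an investment after deducting the total expense ratio (TER)
    from each year's gross growth. A simplified model: constant return,
    fee charged once per year."""
    value = initial
    for _ in range(years):
        value *= (1 + annual_return) * (1 - ter)
    return value

# Assumed: 10,000 euros invested for 20 years at 6 % gross return p.a.
etf = final_value(10_000, 0.06, 0.002, 20)     # passive ETF, 0.2 % TER (assumed)
active = final_value(10_000, 0.06, 0.015, 20)  # active fund, 1.5 % TER (assumed)
print(round(etf - active))  # fee advantage of the cheaper fund in euros
```

Even a seemingly small difference in annual fees compounds into a substantial gap over two decades, which is why the lower cost base of passively managed ETFs matters to private investors.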

Disadvantages compared to active funds

A disadvantage of passively managed ETFs compared to actively managed funds is that the return is firmly linked to the performance of the underlying index. Ultimately, such funds aim to replicate the underlying index as precisely as possible. Active funds, on the other hand, aim to outperform their benchmark. To achieve this, fund managers change the composition of the fund and fill it with the strongest values possible. In practice, however, it is seldom possible to permanently outperform the index.

Risks and selection of ETFs

The risk of an exchange traded fund depends on the type of fund and the underlying indices. Safe investments include ETFs that fully track a market-wide index. These funds are characterized by a large diversification of the fund capital. The strength of the index also influences the safety of ETFs. Nevertheless, a risk of loss in stock exchange transactions cannot be ruled out. Before investing in exchange-traded funds, it is worth comparing the offers.