Category: Technology

All You Need to Know About Event Handler and WebAssembly

What is an event handler?

With an event handler, a software developer can control exactly what should happen in a program when a certain event occurs. Triggering events can have different origins, but they are often triggered by user interaction.

Events occur in a wide variety of places in software, for example when a user clicks a button, moves the mouse pointer, or types in a text field. Event handlers have the task of recognizing these events and then executing a predetermined action, such as temporarily storing the content of a text field as soon as the user changes it.

Event oriented software

Programs that do not always run linearly according to the same scheme, but react to user input and behavior – or, more generally, to changes in state – work with events. This means that at certain points in the program, a specific event is expected to possibly occur.

Accordingly, developers must provide a way for the program code to deal with these events. The code must first monitor whether and when a predetermined event occurs. Once such an event is detected, the associated functionality or routine can be executed.

Interface between code and events

Event handlers are most commonly used to create a connection between elements of the graphical user interface and the code behind it. This gives programmers the ability to react directly to user input or to other events. In addition, event handlers allow programs to react spontaneously and dynamically to events instead of actively waiting for a specific event and thereby blocking resources.

Examples

  • The validation of a password field in a form is triggered only when the user has entered something in the associated text field and then left the field again.
  • Only when a user clicks the “Select Date” button should the program display a control for selecting the date.
  • While the user is entering text in a text field, each new character is checked to determine whether the content exceeds the maximum number of characters allowed.
  • When a user presses a predetermined key or key combination, a predetermined event is triggered, e.g. a change to a certain view. (Two of these cases are sketched in code below.)
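
A minimal sketch of the first and third examples in plain JavaScript; the element IDs and the character limit are assumptions for illustration:

    // Hypothetical form fields; the IDs are assumptions for illustration.
    const password = document.getElementById('password');
    const comment = document.getElementById('comment');
    const MAX_CHARS = 140; // assumed limit

    // Validation runs only when the user leaves the field ('blur' event).
    password.addEventListener('blur', () => {
      if (password.value.length < 8) {
        console.warn('Password must be at least 8 characters long.');
      }
    });

    // Each new character is checked while the user is typing ('input' event).
    comment.addEventListener('input', () => {
      if (comment.value.length > MAX_CHARS) {
        console.warn('Maximum number of characters exceeded.');
      }
    });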

What is WebAssembly?

WebAssembly is intended as a supplement to JavaScript in the web browser. It is a bytecode that is meant to help increase the speed of web applications. However, the system still has certain weaknesses.

WebAssembly, or Wasm for short, is meant to eliminate some of the shortcomings of JavaScript. That language is rather messy and loosely specified for the central role it has played in web development for quite some time, and it cannot meet modern performance requirements on its own.

On a small scale, developers have resorted to Emscripten, which compiles C programs so that they can be executed in the browser's virtual machine. On a larger scale, however, this approach breaks down because JavaScript cannot deliver the necessary performance.

WebAssembly is designed to solve exactly these problems. It is a bytecode that constitutes an alternative virtual machine with sufficient performance.

WebAssembly simply explained

The easiest way to explain what Wasm does is with a metaphor: a hybrid vehicle has an electric motor and a petrol engine. The electric motor is the browser's normal JavaScript environment. That motor is completely sufficient for everyday tasks, but it has performance limits. In such moments, the petrol engine (the Wasm machine) is switched on to make up the difference. The following things must be taken into account:

  • WebAssembly and the JavaScript environment need to be able to communicate with each other to determine when the additional power is needed.
  • Wasm provides so-called “performance islands” rather than continuous background support. In practice, the code compiled for WebAssembly is loaded and executed in the form of modules.
  • So that the modules can be generated, compilers are required that understand and can process the Wasm format.
  • Such compilers are available, for example, for C and C++ (via Emscripten or Clang), for Rust, and for C# (via Blazor).
  • The compilers must ensure that handovers can take place in both directions. To stay in the picture: the petrol engine must be able to indicate when its power is no longer needed, not just receive commands. (A sketch of this two-way exchange follows.)
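
A minimal sketch of this two-way exchange using the standard JavaScript WebAssembly API; the module name, its exported sum function, and the report_progress callback are assumptions for illustration:

    // Import object: functions the Wasm module can call back into JavaScript.
    const imports = {
      env: {
        // Hypothetical callback the module uses to hand control back to JS.
        report_progress: (percent) => console.log(`Wasm progress: ${percent}%`),
      },
    };

    // Fetch, compile, and instantiate the module in one streaming step.
    // (Run inside an ES module or an async function.)
    const { instance } = await WebAssembly.instantiateStreaming(
      fetch('hot-path.wasm'), // assumed module URL
      imports
    );

    // Call an exported Wasm function from JavaScript (assumed signature).
    console.log(instance.exports.sum(2, 40)); // e.g. 42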

The advantages of WebAssembly

All major browser developers (Microsoft, Google, Apple, Mozilla, Opera) are driving the development of WebAssembly as part of the “Open Web” platform. Since 2017, all popular browsers have supported the corresponding modules, with the exception of Internet Explorer, which has since been discontinued. The following advantages have emerged:

  • Web applications load faster.
  • The corresponding apps work more efficiently.
  • Wasm environments are safe because they run in a sandbox.
  • The bytecode is openly accessible and can be used accordingly for development and debugging.
  • In theory, it is even possible to use WebAssembly as a virtual machine outside of browsers. An interface called WASI (WebAssembly System Interface) is intended to make this possible in practice.

WebAssembly problems

One major problem WebAssembly struggles with is the difficulty of deployment. At the moment, Wasm modules can only be loaded in the browser either by manually programming a separate JavaScript loader or by using the “shell page” generated by Emscripten.

The latter, however, is difficult and sometimes impossible: many of the commands in the standard library require supporting infrastructure in the runtime, which is often not available. To take up the metaphor again: the user currently still has to ensure that the electric motor explains to the petrol engine what is required every time. Unfortunately, the engine does not understand some commands, which therefore have to be explained again and again (or supplied by a support library you create yourself).
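
A minimal sketch of such a hand-written JavaScript loader, assuming a hypothetical module.wasm and no special runtime support:

    // Hand-rolled loader: fetch the binary, compile it, and instantiate it.
    // Falls back to ArrayBuffer instantiation for servers that do not serve
    // .wasm files with the application/wasm MIME type.
    async function loadWasm(url, imports = {}) {
      if (WebAssembly.instantiateStreaming) {
        try {
          return (await WebAssembly.instantiateStreaming(fetch(url), imports)).instance;
        } catch (e) {
          // Wrong MIME type or similar; fall through to the buffer path.
        }
      }
      const bytes = await (await fetch(url)).arrayBuffer();
      return (await WebAssembly.instantiate(bytes, imports)).instance;
    }

    // Usage (the module name is an assumption for illustration):
    // const instance = await loadWasm('module.wasm');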

The developers do have a solution in mind for this problem: in the future, script tags are supposed to help load the modules. So far, however, this approach has not seen any noticeable success.

All You Need to Know About Software Quality Assurance

What is Software Quality Assurance?

Quality assurance is important for software providers and their products, for developers, and for clients. The procedures and standards specified by a QA (Quality Assurance) program prevent product defects before they arise.

The term quality assurance (QA; in German, Qualitätssicherung or QS) is fundamentally associated with systematic processes. The central question under consideration is: does a product to be created meet the requirements placed on it? Corresponding assurance practices have existed for generations.

Today ISO, the International Organization for Standardization, deals with the mapping and certification of quality assurance practices. In a formalized form, QS has its origins in the manufacturing industry, where quality assurance processes are described in the ISO 9000 family of standards. Corresponding security concepts now determine the processes in most industries and quite explicitly in software development.

Quality: assurance versus control

A basic definition helps distinguish between the terms “quality assurance” and “quality control”. The two terms undoubtedly have functional similarities, but there are notable differences in content:

  • Quality control is a production-oriented process. This includes processes such as testing and troubleshooting.
  • Quality assurance refers to a systematic process to ensure a specified requirement profile. The point is to prevent faulty program code and to deliver error-free products in this way.

The software developer and QA models

Quality assurance is central to software development. After all, the goal is to avoid errors in software products before they manifest themselves. This approach reduces development time on the one hand and has a cost-reducing effect on the other. Strategies for ensuring software quality are correspondingly diverse.

A model geared towards performance optimization bears the abbreviation CMMI, for Capability Maturity Model Integration. This is an evaluation scheme based on maturity levels, which are ranked with the aim of optimization. Other models, all of which aim to increase work efficiency, include the following:

Waterfall model: a traditional, linear development approach. In this classic step-by-step approach, functional phases such as program design, code implementation, testing, corrections, and final approval follow one another.

Agile: a team-oriented development methodology in which a so-called “sprint” represents one step in the work process.

Scrum: an agile framework in which development teams are formed for special tasks and each task is divided into several sprints.

Tools and methodology in quality assurance

How can a software developer ensure that a high-quality end product is delivered to the client? Testing the software is an elementary part of software quality assurance. Today, developers have numerous modern tools and platforms at their disposal that provide a high degree of test automation. Proven tools frequently used in software houses include (in alphabetical order):

Jenkins: this open-source automation server makes it possible to build and test code continuously. Thanks to its degree of automation, Jenkins is well suited to fast-moving program environments.

Selenium: another open-source product. It enables tests to be written in numerous languages such as C#, Java, or Python, and automates the browser interactions under test (see the sketch below).
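
A minimal sketch using Selenium's JavaScript bindings (the selenium-webdriver npm package); the page URL, field name, and expected title are assumptions for illustration:

    const { Builder, By, until } = require('selenium-webdriver');

    (async () => {
      // Start a browser session (assumes a matching ChromeDriver is installed).
      const driver = await new Builder().forBrowser('chrome').build();
      try {
        await driver.get('https://example.com/login');               // hypothetical page
        await driver.findElement(By.name('user')).sendKeys('alice'); // assumed field name
        await driver.findElement(By.css('button[type=submit]')).click();
        // Wait until the page signals success (assumed title).
        await driver.wait(until.titleContains('Dashboard'), 5000);
        console.log('Login flow passed');
      } finally {
        await driver.quit();
      }
    })();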

Postman: a tool specially designed for testing web APIs. It is available for Windows, macOS, and Linux, and supports API description formats such as Swagger (OpenAPI) and RAML.

Conclusion: quality assurance as a distinguishing feature

Quality assurance (QA) differs from the actual testing routine; rather, it defines the standards that testing must follow. Ultimately, this is done with the aim of ensuring that a software product is suitable for the requirement profiles specified in the order. Quality assurance teams make a valuable contribution to product quality – a competitive differentiator that should not be underestimated.

All You Need to Know About K Desktop Environment

The K Desktop Environment (KDE) has its own input/output system called KIO, which can access local files, network resources (through protocols such as HTTP, FTP, NFS, and SMB), or virtual protocols (a photo camera, a compressed file, etc.) with complete transparency, benefiting every KDE application. KIO's modular architecture allows developers to add new protocols without requiring modifications to the base of the system.

Finally, the component framework KParts allows applications to be embedded within others, avoiding code redundancy throughout the system. Additionally, KDE has its own HTML engine called KHTML, which has been reused and extended by Apple (to create its Safari browser) and by Nokia.

KDE Project

As the project history shows, the KDE team releases new versions at short intervals. The team is renowned for sticking to its release plans, and it is rare for a release to be more than two weeks late.
One exception was KDE 3.1, which was delayed by over a month due to a number of security-related issues in the codebase. Keeping to strict release plans in a volunteer project of this size is unusual.

Major releases

Date                Release
KDE 1
July 12, 1998       KDE 1.0
February 6, 1999    KDE 1.1
KDE 2
October 23, 2000    KDE 2.0
February 26, 2001   KDE 2.1
August 15, 2001     KDE 2.2
KDE 3
April 3, 2002       KDE 3.0
January 28, 2003    KDE 3.1
February 3, 2004    KDE 3.2
August 19, 2004     KDE 3.3
March 16, 2005      KDE 3.4
November 29, 2005   KDE 3.5
KDE 4
January 11, 2008    KDE 4.0
July 29, 2008       KDE 4.1
January 27, 2009    KDE 4.2
August 4, 2009      KDE 4.3
February 9, 2010    KDE 4.4

A major release of KDE has two version numbers (for example, KDE 1.1). Only major releases incorporate new functionality. There have been 16 major releases so far: 1.0, 1.1, 2.0, 2.1, 2.2, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 4.0, 4.1, 4.2, 4.3, and 4.4. All releases sharing the same major version number (KDE 1, KDE 2, KDE 3, and KDE 4) are compatible in both binary code and source code. This means, for example, that software developed for KDE 4.2.x will work with all KDE 4 releases.

Except during major version changes, alterations never require recompiling or modifying the source code. This maintains a stable API (Application Programming Interface) for KDE application developers. The changes between KDE 1 and KDE 2 were large and numerous, while the API changes between KDE 2 and KDE 3 were comparatively minor, so applications could be ported to the new architecture easily.

Major version changes to KDE are intended to follow those of the Qt Library, which is also under constant development. So, for example, KDE 3.1 requires Qt ≥ 3.1 and KDE 3.2 requires Qt ≥ 3.2. However, KDE 4.0 requires Qt ≥ 4.3 and KDE 4.1 requires Qt ≥ 4.4.

As soon as a major release is ready and announced, it is moved to a stable branch in the SVN repository, while work on the next major release begins on the main line (trunk). A major release takes several months to complete, and many of the bugs found during this stage are also fixed in the stable branch.

Minor releases

Separate release dates for minor releases are scheduled less formally. A minor KDE release has three version numbers (for example, KDE 1.1.1), and developers focus on fixing bugs and improving minor aspects of the programs instead of adding functionality.

Criticism

  • KDE was criticized in its early days because the library on which it is built (Qt), despite being developed in an open-source fashion, was not free. On September 4, 2000, the library began to be distributed under the GPL, and the criticism gradually subsided. Currently, and since version 4.5, the library is additionally available under the LGPL 2.1.
  • Some outsiders criticize KDE's similarity to the Windows desktop environment. This observation, however, concerns the default settings of the environment, which are often chosen to make it easier to use for new users, most of whom are used to working with Microsoft operating systems. KDE is in fact highly configurable, and its branch 4 has desktop effects integrated into Plasma and KWin comparable to those of Compiz.

Project Organization

Featured collaborators
Function            Name                      Origin
Graphic designer    Everaldo Coelho           Brazil
                    Nuno Pinheiro             Portugal
Programmer          Aaron Seigo               Canada
                    David Faure
                    Duncan Mac-Vicar Prett    Chile
                    Dirk Müller
                    Eva Brucherseifer
                    George Staikos
                    Lars Knoll
                    Matthias Ettrich

Like many other free projects, KDE is built primarily through the effort of volunteers. Since several hundred individuals contribute to KDE in various ways (programming, translating, producing art, etc.), organizing the project is complex. Most of the issues are discussed on the different project mailing lists.

Contrary to what one might expect of such a large project, KDE does not have centralized leadership; Matthias Ettrich, the founder of the KDE project, does not carry particular weight in decisions about its direction. Important decisions, such as release dates or the inclusion of new applications, are made by the core developers on a restricted mailing list. The core developers are those who have contributed to KDE over a long period of time. Decisions are not made in a formal voting process, but rather through discussion on the mailing lists. Generally, this method works very well.

Any user is welcome to report a bug found in the software, and it is also possible to request new functionality (a “wish”). It is enough to report it, in English, on the website provided for this purpose: the KDE Bug Tracking System. In legal and financial matters, the KDE project is represented by KDE e.V., a German non-profit organization.

All You Need to Know About SSL

SSL (from the English Secure Sockets Layer) provides security services by encrypting the data exchanged between server and client with a symmetric encryption algorithm, typically RC4 or IDEA, and by encrypting the RC4 or IDEA session key with a public-key algorithm, typically RSA.

Origin

Secure Sockets Layer is a general-purpose set of protocols designed in 1994 by Netscape Communications Corporation. It is based on the combined application of symmetric cryptography, asymmetric (public-key) cryptography, digital certificates, and digital signatures to provide a secure communication channel over the Internet.

Symmetric cryptography, the main engine for encrypting the data transferred in the communication, is used for its speed of operation, while asymmetric cryptography is used for the secure exchange of the symmetric keys, thereby solving the problem of confidentiality in data transmission.

Process

When you connect to a secure server (https://www…), the browser notifies you of this circumstance by means of a padlock icon and also lets you inspect the information contained in the digital certificate that qualifies it as a secure server. SSL allows data such as credit card information to be collected in a secure environment, since information sent through a secure form is transmitted to the server in encrypted form.

SSL implements a negotiation protocol to establish secure communication at the socket level (machine name plus port), transparently to the user and the applications that use it.

When the client asks the secure server for secure communication, the server opens an encrypted port managed by software called the SSL Record Protocol, located on top of TCP. Higher-level software, the SSL Handshake Protocol, then uses the SSL Record Protocol and the open port to communicate securely with the client.

The SSL Handshake Protocol

During the SSL handshake, the client and the server exchange a series of messages to negotiate security parameters. The protocol runs through the following six phases (very briefly; a connection sketch follows the list):

  • The hello phase, used to agree on the set of algorithms for privacy and authentication.
  • The key exchange phase, in which the parties exchange key information so that in the end both share a master key.
  • The session key production phase, which produces the key used to encrypt the exchanged data.
  • The server verification phase, present only when RSA is used as the key exchange algorithm, which serves for the client to authenticate the server.
  • The client authentication phase, in which the server requests an X.509 certificate from the client (if client authentication is required).
  • Finally, the end phase, which indicates that the secure session can begin.
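
SSL itself is obsolete, but its successor TLS follows the same handshake idea. A minimal sketch in Node.js that prints the parameters negotiated by the handshake; the host name is an assumption for illustration:

    const tls = require('tls');

    // Open a connection: the hello, key exchange, and verification phases
    // all run transparently before the callback fires.
    const socket = tls.connect({ host: 'example.com', port: 443, servername: 'example.com' }, () => {
      console.log('protocol:', socket.getProtocol());  // e.g. 'TLSv1.3'
      console.log('cipher:', socket.getCipher().name); // negotiated cipher suite
      console.log('server verified:', socket.authorized);
      socket.end(); // secure session established; close it again
    });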

The SSL Record Protocol

The SSL Record Protocol specifies how to encapsulate transmitted and received data. The data portion of the protocol has three components (a sketch of the MAC idea follows the list):

  • MAC-DATA, the authentication code of the message.
  • ACTUAL-DATA, the application data to be transmitted.
  • PADDING-DATA, the data required to fill the message when using block encryption.
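
MAC-DATA is a keyed checksum over the payload. A rough illustration of the idea using Node's crypto module, with a modern HMAC-SHA256 standing in for SSL's original MAC construction; the key and payload are hypothetical:

    const crypto = require('crypto');

    // Shared secret derived during the handshake (hypothetical value).
    const macKey = crypto.randomBytes(32);
    const actualData = Buffer.from('application data to transmit');

    // MAC-DATA lets the receiver detect any tampering with ACTUAL-DATA.
    const macData = crypto.createHmac('sha256', macKey).update(actualData).digest();
    console.log('MAC:', macData.toString('hex'));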

General characteristics

  • SSL implements a negotiation protocol to establish secure communication at the socket level (machine name plus port), transparently to the user and the applications that use it.
  • The identity of the secure web server (and sometimes also of the client user) is established through the corresponding digital certificate, whose validity is checked before the exchange of sensitive data begins (authentication), while the integrity of the exchanged data is guaranteed by digital signatures, i.e. by hash functions and the verification of digests of all data sent and received.
  • SSL provides security services to the protocol stack by encrypting outgoing data from the Application layer before it is segmented at the Transport layer and encapsulated and sent by the lower layers. It can also apply compression algorithms to the data to be sent, and it fragments blocks larger than 2¹⁴ (16,384) bytes, reassembling them at the receiver.
  • In the OSI and TCP/IP reference models, SSL is introduced as a kind of additional layer between the Application layer and the Transport layer, replacing the operating system's sockets, which makes it independent of the application that uses it; it is generally implemented on port 443. (Note: ports are the interfaces between applications and the operating system's TCP/IP protocol stack.)

Present

The most current version of SSL is 3.0 (its successor protocols are standardized under the name TLS). It uses the symmetric encryption algorithms DES, Triple DES, RC2, RC4, and IDEA, the asymmetric algorithm RSA, and the MD5 and SHA-1 hash functions.

All You Need to Know About LDAP

LDAP, short for Lightweight Directory Access Protocol, is an application-level protocol that allows access to an ordered, distributed directory service to search for various kinds of information in a network environment. LDAP can also be regarded as a database that can be queried. It is based on the X.500 standard. Usually it stores authentication information (user name and password) and is used for authentication, although it is possible to store other information as well (user contact data, the location of various network resources, permissions, certificates, among other data). LDAP is a unified protocol for accessing a set of information on a network.
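
A minimal sketch of that authentication-and-lookup pattern using the ldapjs library for Node.js; the server URL, bind DN, password, and attribute names are assumptions for illustration:

    const ldap = require('ldapjs');

    // Connect to a hypothetical directory server.
    const client = ldap.createClient({ url: 'ldap://ldap.example.com' });

    // Authenticate ("bind") with a DN and password stored in the directory.
    client.bind('cn=reader,dc=example,dc=com', 'secret', (err) => {
      if (err) throw err;

      // Look up a user's contact data anywhere under the base DN.
      const opts = { scope: 'sub', filter: '(uid=jdoe)', attributes: ['cn', 'mail'] };
      client.search('dc=example,dc=com', opts, (err, res) => {
        if (err) throw err;
        res.on('searchEntry', (entry) => console.log(entry.pojo)); // entry.object in older ldapjs versions
        res.on('end', () => client.unbind());
      });
    });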

Origin

Telecommunications companies introduced the concept of directory services to information technology and computer networks; their understanding of directory requirements was well developed after some 70 years of producing and managing telephone directories. The culmination of this effort was the X.500 specification, a set of protocols produced by the International Telecommunication Union (ITU) in the 1980s.

X.500 directory services were traditionally accessed via DAP (Directory Access Protocol), which required the OSI (Open Systems Interconnection) protocol stack. LDAP was originally intended to be a lightweight alternative protocol for accessing X.500 directory services over the simpler (and now more widespread) TCP/IP protocol stack. This directory access model was modeled on the DIXIE and Directory Assistance Service protocols.

Standalone LDAP directory servers were soon implemented, as were directory servers that supported both DAP and LDAP. The latter became popular in enterprises because they eliminated any need to deploy an OSI network. Today, the X.500 directory protocols, including DAP, can also be used directly over TCP/IP.

The protocol was originally created by Tim Howes (University of Michigan), Steve Kille (Isode Limited), and Wengyik Yeong (Performance Systems International) around 1993. A more complete development has been done by the Internet Engineering Task Force.

In the early engineering stages of LDAP, it was known as the Lightweight Directory Browsing Protocol, or LDBP. It was later renamed because the scope of the protocol had been expanded to include not only directory browsing and search functions, but also directory update functions.

LDAP has influenced later Internet protocols, including later versions of X.500, XML Enabled Directory (XED), Directory Service Markup Language (DSML), Service Provisioning Markup Language (SPML), and Service Location Protocol (SLP).

Characteristics of an LDAP directory

  • It is very fast at reading records.
  • It allows the server to be replicated in a very simple and economical way.
  • Many applications of all kinds have LDAP connection interfaces and can be easily integrated.
  • It uses a hierarchical information storage system.
  • It allows multiple independent directories.
  • It works over TCP/IP and SSL (Secure Sockets Layer).
  • Most applications have LDAP support.
  • Most LDAP servers are easy to install, maintain, and optimize.

Given these characteristics, the most common uses of LDAP are:

  • Information directories.
  • Centralized authentication / authorization systems.
  • Email systems.
  • Public certificate and security key servers.
  • Single sign-on (SSO) and application customization.
  • Centralized user profiles.
  • Shared address books.
The Advantages of LDAP Directories

Now that we have straightened out the terminology, what are the advantages of LDAP directories? LDAP's current popularity is the culmination of a number of factors. I'll give you a few basic reasons, as long as you keep in mind that this is only part of the story.

Perhaps the greatest advantage of LDAP is that your company can access the LDAP directory from almost any computing platform, from any of the growing number of applications readily available for LDAP. It’s also easy to customize your internal company applications to add LDAP support.

The LDAP protocol is cross-platform and standards-based, so applications need not worry about the type of server hosting the directory. In fact, LDAP is finding much wider acceptance because of its status as an Internet standard. Vendors are more willing to code LDAP integration into their products because they don't have to worry about what's on the other side. Your LDAP server can be any of a number of commercial or open-source LDAP directory servers (or even a DBMS server with an LDAP interface), since interacting with any true LDAP server involves the same protocol, client connection package, and query commands.

Unlike relational databases, you don’t have to pay for each client software connection or license.

Most LDAP servers are simple to install, easily maintainable, and easily tuneable.

LDAP servers can replicate some or all of their data via push or pull methods, allowing you to push data out to remote offices, increase your security, and more. Replication technology is built in and easy to configure. By contrast, many DBMS vendors charge extra for this feature, and it is considerably more difficult to manage.

LDAP lets you safely delegate read and modification rights according to your needs, using ACIs (collectively an ACL, or Access Control List). For example, your facilities group might be granted access to change the location, cubicle, or office number of employees, but not be allowed to modify entries in any other field. ACIs can control access depending on who is requesting the data, what data is being requested, where the data is stored, and other aspects of the record being modified. All of this is done directly through the LDAP directory, so you don't need to worry about security checks at the user-application level.

LDAP is particularly useful for storing information that you want to read from many locations, but that is not frequently updated. For example, your company could store all of the following data in an LDAP directory:

  • The company’s employee phone book and organizational chart
  • External customer contact information
  • Information on the service infrastructure, including NIS maps, email aliases, and more
  • Configuration information for distributed software packages
  • Public certificates and security keys

Using LDAP to store data

Most LDAP servers are heavily optimized for read-intensive operations. Because of this, one can typically see an order-of-magnitude difference when reading data from an LDAP directory versus getting the same data from an OLTP-optimized relational database. However, because of this optimization, most LDAP directories are not well suited to storing data that changes frequently. For example, an LDAP directory server is good for storing your company's internal phone book, but don't even think about using it as the database repository for a high-volume e-commerce site.

If the answer to each of the following questions is Yes, then storing your data in LDAP is a good idea.

  • Would you like your data to be available through various platforms?
  • Do you need access to this data from a number of computers or applications?
  • Do the individual records you store change only a few times a day or less, on average?
  • Does it make sense to store this type of data in a flat database instead of a relational database? That is, can you store all the data, for a given item, effectively in a single record?

This final question often gives people pause, because it is very common to access a flat record to obtain data that is relational in nature. For example, a record for a company employee might include the login name of that employee's manager. It is fine to use LDAP to store this type of information. The acid test: if you can imagine all your data stored in an electronic Rolodex, then you can easily store it in an LDAP directory.
