Data and software compression for 64-bit systems in easy steps


What is compression?

Compression is the process of encoding data so that it occupies less space, allowing it to be stored or transmitted more efficiently. Compression can be applied to any kind of data, including a particular type: binary files, whether executables, dynamic link libraries (DLLs) or any other binary file. In every case the result is a reduction in the number of bits and bytes, and therefore a smaller file. The size of the data in compressed form relative to its original size is called the compression ratio. Ratios can differ greatly depending on the algorithm used and the type of file.
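As a minimal sketch of the compression-ratio idea, the snippet below compresses a repetitive text sample with Python's standard zlib library (chosen only for illustration; the article does not name a specific tool) and computes the ratio as compressed size over original size:

```python
import zlib

# Compress a repetitive text sample and compute its compression ratio,
# here expressed as compressed size divided by original size.
original = b"compression " * 1_000      # 12,000 highly repetitive bytes
compressed = zlib.compress(original)

ratio = len(compressed) / len(original)
print(f"original: {len(original)} bytes, compressed: {len(compressed)} bytes")
print(f"compression ratio: {ratio:.3f}")
```

Because the input is almost pure repetition, the ratio here is tiny; real-world files land somewhere between this extreme and no reduction at all.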

Why compression?
Even though modern PCs tend to be equipped with relatively large hard drives, running out of space still happens all too often. A similar problem arises when sending or receiving files over the Internet: a big file can take a long time to transfer, and extremely long on a slow connection. So what can be done? The answer is to compress the files so that they take less space and less transmission time.

How to use compression?
One way is to use programs that are specifically created to compress and decompress files. Once compressed, the files usually cannot be used until they are decompressed again, so this kind of compression is best suited for archiving or sending data. A well-known example of a compression technology is ZIP, a common standard for compressing data files. For binaries, this approach alone is not enough, because a compressed executable would lose the ability to run; it must remain self-contained (see the section on software compression below). In many other cases compression works without the user even having to think about it: a modem uses a form of compression when it sends and receives data, and a graphic in JPEG format is another example.

How does compression work?
If you have a file containing text, it will contain not just single words but also repeated combinations of words and phrases that consume more space than they need to. The same holds for binary files with repetitive bits and bytes, and for media such as images whose data takes far more space than necessary. A document or file can be compressed electronically to reduce this inefficiency.

How is compression achieved?
Compression is performed by compression algorithms (formulas) that reorder and reorganize data so that the information can be stored more economically. By re-encoding the information, the data can be stored using fewer bits. This is achieved by a compression/decompression program that rewrites the structure of the data. Methods may include simply removing spaces, representing a series of repeated characters using just two characters, or replacing larger sequences with smaller bit patterns. Some compression algorithms go further and remove information entirely in order to reach a smaller file size. Depending on the algorithm used, files can be reduced considerably relative to their original size.
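The "series of repeated characters using two characters" method mentioned above is run-length encoding. A minimal sketch (illustrative only; the function names are my own, not from any particular library):

```python
def rle_encode(data: str) -> list:
    """Encode a string as (character, run_length) pairs."""
    runs = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            # Extend the current run instead of storing the character again.
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def rle_decode(runs: list) -> str:
    """Expand (character, run_length) pairs back to the original string."""
    return "".join(ch * count for ch, count in runs)

encoded = rle_encode("aaaabbbcca")
print(encoded)                  # [('a', 4), ('b', 3), ('c', 2), ('a', 1)]
print(rle_decode(encoded))      # aaaabbbcca
```

Note this only pays off when runs are long; for text with few repeats the pairs can be larger than the original, which is why real compressors combine several techniques.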

Are there different systems?
If reversing the compression (decompression) generates an exact copy of the original data, the compression is lossless. The other kind, lossy compression, usually applied to image data, does not allow an exact replica of the original image to be reproduced, but achieves a higher compression ratio. Lossy compression therefore allows only an approximation of the original to be regenerated.

What is lossy compression?
Lossy compression reduces files by deleting bits of data that are, fortunately, not strictly necessary. MP3 is a system of this type: it is based on the way the brain interprets audio and uses various tricks to produce something that sounds almost the same while discarding as much as 90% of the data. Another lossy system is JPEG, which is designed to provide high compression for images. For example, in an image of a landscape with a blue sky, many slightly different shades of green and blue are eliminated. The nature of the data is not lost, because the basic colors are still present; large parts of the image, perhaps even whole lines or surfaces, are flattened to a single color, but the image looks the same to the human eye.
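The merging of nearby shades described above is a form of quantization. The toy function below (my own illustration, not the actual MP3 or JPEG algorithm) collapses 8-bit sample values down to a handful of levels; the result is more repetitive and thus more compressible, but the original values cannot be recovered exactly:

```python
def quantize(samples, levels=16):
    """Snap 0-255 sample values down to `levels` distinct values,
    discarding fine detail the way lossy codecs do."""
    step = 256 // levels
    return [(s // step) * step for s in samples]

# Nearby shades (3 and 7, 100 and 103) collapse to the same level.
samples = [3, 7, 100, 103, 200, 255]
print(quantize(samples))        # [0, 0, 96, 96, 192, 240]
```

The information thrown away by the `//` division is gone for good; that irreversibility is precisely what makes the scheme lossy.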
What is lossless compression?
Lossless compression is a type of compression that reduces file size without losing information. The original file can be restored exactly when decompressed. These algorithms find repeated patterns, create reference entries for them, store the entries in a table, and send the table along with the encoded, now smaller, file. When the file is decompressed, it is re-created by replacing each reference with the original information.

When should lossless compression be used?
Lossless compression is ideal for documents containing text and numerical data, where no loss of information can be tolerated. ZIP compression, for instance, is lossless: it recognizes patterns and replaces them with a single character (plus an indicator). This works because most files contain a lot of whitespace or repetitive data. As an example, observe that in the text you are now reading, the word "compression" appears again and again, each occurrence taking 11 bytes of storage (one for each letter). A compression system notices this and, after the first occurrence, rather than storing the actual word, stores a one-byte indicator marking a repeated word plus a byte identifying which word it is. Each subsequent occurrence of "compression" then takes 2 bytes instead of 11, a saving of nine bytes, or more than 80% of the space for that word. If you repeat this process for the 256 most commonly used words, you can make a real difference in file size. When you unzip the file, the decompression program finds these codes and puts the complete words back in their place, restoring the document to its original size and content.

What are the results?
The success of data compression depends largely on the data itself, as some data types are inherently more compressible than others. In general, some of the elements within the data are more common than others, and most compression algorithms exploit this property, known as redundancy. The more redundancy in the data, the more successful the compression. In this sense, digital video has a high level of redundancy, which makes it very suitable for compression.

A device (software or hardware) that compresses data is commonly known as an encoder or coder, while a device that decompresses data is referred to as a decoder. A device that serves as both encoder and decoder is called a codec. Numerous compression techniques have been developed; some lossless techniques can be applied to any type of data, while in recent years the development of lossy techniques designed specifically for image data has contributed to the realization of digital video applications. So much for compression in general, but what about compression of binaries?

Software compression
As already mentioned, a compressed executable (or DLL) must remain autonomous. Therefore the compressed data must be self-extracting: it is packed together with the decompression code into a single executable file, so there is no need to run a separate program to use the compressed executable. The decompression code that is added to the compressed data is often called the decompression stub. When a compressed executable runs, the stub essentially unpacks the original executable code, rebuilding the original binary before passing control to it. The effect is the same as if the original executable had been launched; for the casual user, compressed and uncompressed executables are indistinguishable.
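The stub-plus-payload idea can be modeled in a few lines. This is purely a conceptual sketch in Python: real packers bundle a machine-code stub with the compressed binary, whereas here the "program" is a line of Python source and the "stub" is an ordinary function:

```python
import zlib

# The "original program": for illustration, just a line of Python source.
program = b"print('hello from the unpacked program')"

# Packing step: compress the program. A real packer would prepend a
# machine-code stub to the compressed payload inside one executable file.
packed_payload = zlib.compress(program)

def decompression_stub(payload: bytes) -> None:
    """Stand-in for a packer's stub: restore the original code in
    memory, then transfer control to it."""
    original = zlib.decompress(payload)
    exec(original.decode())             # hand control to the unpacked code

decompression_stub(packed_payload)      # behaves like running the original
```

The key property mirrors the article's point: from the outside, running the packed version is indistinguishable from running the original.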

What is packing?
The act of compressing an executable file or DLL is often referred to as packing, so a typical name for an executable-compressing program is a packer. Most packed executables decompress directly in memory and need no extra file system space to start. However, some decompression stubs are known to write the uncompressed executable to the file system before starting it.

Why use a packer?
Software distributors use executable compression for a variety of reasons, above all to reduce the storage requirements of software. Executable compressors are developed specifically to compress executable code, so they often obtain better compression than standard data compression programs. Software compression allows distributors to stay within the limits of their chosen distribution medium (CD, DVD, …), or to save customers time and bandwidth when software is sold over the Internet. There is also another reason: executable compression is frequently used to deter reverse engineering or to hide the contents of the executable through proprietary methods of compression and/or added encryption. Malware is known to be compressed in most cases to hide its presence from antivirus scanners, since compression masks string literals and signatures against direct inspection. However, executable compression cannot eliminate the possibility of reverse engineering; it can only slow the process down. In general, compression alone is totally inadequate for preventing cracking; protectors are far more reliable for that purpose.

Is the compressed executable slower?
A compressed executable requires less storage space in the file system, so less time is needed to map the file's data into memory. On the other hand, it takes some time to decompress the data before execution can start. However, the speed of most storage media has not kept pace with average processor speeds, so storage is very often the bottleneck; thus the compressed executable loads faster on most common systems. This is somewhat theoretical: on modern desktop computers the difference is usually not noticeable unless the executable is unusually big, so loading speed is not the main reason for or against compressing an executable. Software compression does, however, allow more programs in the same space without the effort of manually unzipping the saved file each time the user runs the software.

And for 64-bit systems (x64)?
Data compression is obviously exactly the same on 32-bit and 64-bit systems. In addition, compression of 32-bit and 64-bit executables yields comparable ratios; as a rule, everything said above holds especially true for 64-bit software. Although the original executable sizes differ slightly between 32-bit and 64-bit builds of the same software, 64-bit software often compresses at a better ratio, because it contains more repeated patterns (there are only the same number of possible bit and byte values in both). This makes it even more desirable to compress 64-bit software than 32-bit software, for reasons of both time and space.

Data recovery for NAND and NOR Flash

NAND or NOR Flash memory:

Flash memory is a non-volatile storage technology that can be electrically erased and reprogrammed. Being non-volatile, these chips retain data even in the absence of power, and they are renowned for fast reads and good resistance to kinetic shock. NAND devices are accessed serially, using the same eight pins to transmit control, address and data. NAND flash was introduced by Toshiba in 1989.

Although NAND and NOR memory chips function differently, both are still widely used in various electronic devices where storing, erasing and reprogramming data are essential. Both were invented by Dr. Fujio Masuoka during his work at Toshiba. The main objective was to make obsolete the old storage devices that work on magnetic media, such as hard disks and tapes, by reducing the cost per bit and increasing chip capacity.

NAND flash is therefore widely used in MP3 players, digital cameras and USB memory sticks, where higher storage capacity is crucial. Some devices, such as pocket PCs, use both types of flash memory simultaneously; such equipment generally uses NOR to boot the operating system and NAND for storage.

Flash memory errors and data recovery:

Data recovery is a highly specialized science, equally applicable to rescuing data from NAND and NOR flash memory. Although NOR is rarely used and quite expensive, NAND is widely preferred for mass storage and is a huge godsend for thin and small devices.

The probability of data loss is elevated when bad memory is used: unbranded USB drives, devices built from rebranded NAND wafers, and so on. These unbranded devices and memory chips are often implicated in spontaneous reboots leading to program failure. Defective memory is often observed to write correct data to an incorrect position on the device, which eventually corrupts data and causes data loss. In addition, defective memory can undermine your device's logical structures and eventually destroy them; in more extreme cases, the system or boot device itself fails.

Data recovery, or NAND data restoration, can minimize your losses and protect your system from incurable problems. It is strongly recommended to buy branded USB drives, SD cards, USB flash drives and CompactFlash cards, and to avoid unnamed devices for storing important data. NAND recovery is possible, but very complicated.

First aid before data recovery:

Data loss is a common problem, often caused by virus attacks on our USB drives or by physical or logical failures when our computers are in use. Data recovery is the one solution that can save you and your data from the nightmare of permanent loss. But experts agree that recovery becomes far more complicated when a user ignores the safety measures that would prevent further data loss. Therefore, in the event of data loss, you should implement some basic precautions.

• Once data loss is confirmed, no further attempt should be made to store data on the device. This can result in much more serious data loss.
• It is better to contact experts for help and other corrective action.
• Using utilities such as scandisk should be completely avoided.
• Avoid unnamed or rebranded devices.
• Sellers on eBay are not always honest. eProvided™ has learned many times that customer data is stored on fake NAND wafers; an SD card sold as up to 16 GB may contain a damaged 1 GB NAND wafer inside.

Data recovery of hard drives and NAND memory:

Data recovery processes involve various techniques using the latest technologies. First, the experts determine the nature of the failure and classify it as level 1 or level 2. If the device falls into level 1, meaning it suffers from logical damage only, data recovery is usually cheaper compared with level 2. However, if the failure is level 2, the device is physically damaged and may have many damaged internal circuits; experts then need to repair or recreate circuitry, and the process is usually more expensive.

After determining level 1 or level 2, an image of the device is saved, and these images are moved to a separate server to avoid accidental loss. The original copy of the image is then taken to a laboratory, where level 1 recovery is performed with several utilities; the work can include repairing the device, dumping data directly from the NAND wafers, circuit repair, and so on. Several other attempts may also be made, consisting of testing the device and repairing the logical damage.

If level 1 recovery fails, the task becomes harder, and level 2 recovery is considered. This involves highly practical techniques and experienced experts making critical use of software utilities and hexadecimal-level analysis. These software utilities are used to re-create the drive and work around other drive errors.

Where there are physical defects, work such as head replacement, motor repair, or the cleaning of scratches with different cleaning liquids is undertaken, and spare parts are ordered by size and code. After each attempt at physical and logical recovery, copies of the recovered data are stored on another volume: CD/DVD, flash drives or hard drives.
