About the invention: Recursive data compression, described in Patent 5,488,364; the illustration reduces the number of symbols in a string by ninety-eight percent (98%)
SUPPLEMENTED VERSION: A LITTLE REVERSE ENGINEERING AND COMPUTABLE REDUCIBILITY MODELS ANALOGOUS TO LOSSLESS RECURSIVE DATA COMPRESSION DESCRIBED IN EXPIRED* U.S. PATENT 5,488,364
continued ... Recursive data compression using a recursive structure
Generalized recursive data compression using a recursive structure
Analysis of the Recursive data compression method
More Analytical results and Recursive data compression claims
Recursive, Statistically Random Data Compression And Creating Algorithmic Incompressible Data
Appendix A - Patents which reference Patent 5,488,364
ANALYSIS OF QUASI-OPTIMUM COMPRESSION USING PATENTED ADAPTIVE MODEL RECURSIVE DATA COMPRESSION WITH VUSP SUB-STRUCTURE NETS 153,000 PERCENT MORE RECURSIVELY COMPRESSIBLE *-ZIP FILES.
The Recursive data compression patent 5,488,364 describes an original adaptive model for data compression: it teaches a method of adapting the data to fit the data compressor. Abstractly, the method algorithmically configures a predetermined criterion as a boolean operation key and leverages that selective configuration in a manner suggestive of encryption; but instead of encrypting the information, the method reorders the input data by directing the contextual rearrangement (intelligent morphing) of the input data into binary permutation blocks. When the intelligently morphed permutation blocks are combined with the boolean operation keys, they result in reversibly ordered combinational blocks.

As the sole inventor, I transferred a fifty-percent interest in the Recursive data compression method in 1994, prior to its being patented. I viewed the transfer as a way to procure patent protection for my invention. However, I discovered in 2006 that the patent had expired in 2000 because the assignee had decided not to pay the fees to maintain the patent, thereby rendering my investment worthless. So I have worked alone on the ideas expressed here since 2007, in a reinvestment of my individual effort.
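Returning to the abstract description above: the following Python fragment is a minimal sketch of the idea, my own illustrative reading and not the patented method itself. It shows how an 8-bit boolean operation key can direct a reversible rearrangement of fixed-size blocks; the block size, the key semantics, and the pairing rule are all hypothetical choices of mine, not taken from the patent.

    # Hypothetical sketch: an 8-bit "boolean operation key" directs a
    # reversible rearrangement of the input. Each key bit decides whether
    # a pair of adjacent blocks is swapped (pairs past the end of the
    # data are ignored); keeping the key with the output makes the
    # transform lossless and invertible.

    def morph(data: bytes, key: int, block: int = 4) -> bytes:
        blocks = [data[i:i + block] for i in range(0, len(data), block)]
        for bit in range(8):
            if key & (1 << bit):
                i, j = 2 * bit, 2 * bit + 1   # pair selected by this key bit
                if j < len(blocks):
                    blocks[i], blocks[j] = blocks[j], blocks[i]
        return b"".join(blocks)

    def unmorph(data: bytes, key: int, block: int = 4) -> bytes:
        return morph(data, key, block)        # disjoint swaps are self-inverse

    original = b"ABCDEFGHIJKLMNOPQRSTUVWXYZ123456"
    key = 0b10110001
    assert unmorph(morph(original, key), key) == original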
An analysis of Recursive data compression (using the most primitive 8-bit keyword structure, no Sub-Structure (Illustrations 1.g, 1.h), and utilizing only *-zip files) revealed that for every uncompressed file that compresses by at least three bytes using *-zip there are 255 other file permutations which will recursively compress equally well (+2 bytes).
*-zipping a zero-byte file ("a") results in a *-zip file at least twenty-two bytes in length, and although I consider that a prima facie argument that all *-zip files are in theory compressible, I did not interpose any adaptation for it into this equational analysis.
Notwithstanding, there was an aggregate increase of 25,500 percent (255 × 100% = 25,500%) in recursively compressible *-zip files, observed as two-hundred-fifty-six individually permuted files for each individually compressed zip file that recompresses under Recursive data compression. Additionally, for each *-zip file that re-compresses (by at least three bytes using *-zip) there are an additional 255 file permutations which will recursively compress equally well (+2 bytes). That is, if an arbitrary proportional assignment of one percent of N-length *-zip files can be re-compressed by at least three bytes using *-zip, then the number of file permutations that can be compressed by at least one byte is two-hundred-fifty-five percent (255 × 100% × 1% = 255%), equivalent to 2.55 times the aggregate number of all *-zip files which are N bytes in length. That equals another increase of 25,500 percent, or two-hundred-fifty-six individually permuted files for each individually compressed zip file that recompresses.
Using the reverse engineering in this example allows the statistics to be traced. For example, assume a one-megabyte file has a *-zip image which is a quarter-megabyte in length. Then 256 copies of the *-zip image are made, each with a unique 8-bit keyword appended to it, plus one optional ancillary byte. Each new quarter-megabyte file then represents one (of 256) unique permutations of the original file. The result is a corresponding compression ratio of 4:1 for each of these 256 files, as the sketch below traces.
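This is a minimal sketch of the bookkeeping in that example, assuming the quarter-megabyte *-zip image is already in hand; the placement of the 8-bit keyword (appended, followed by one optional ancillary byte) is my reading of the description above.

    # Hypothetical sketch of the 256-copy construction described above.
    # Each copy of the quarter-megabyte *-zip image gets a unique 8-bit
    # keyword appended, plus one optional ancillary byte; each resulting
    # file stands for one of the 256 permutations of the original
    # one-megabyte file, so each keeps roughly the same 4:1 ratio.

    zip_image = bytes(256 * 1024)          # stand-in for the *-zip image

    permutation_files = []
    for keyword in range(256):
        permutation_files.append(zip_image + bytes([keyword]) + b"\x00")

    ratio = (1024 * 1024) / len(permutation_files[0])
    print(f"{len(permutation_files)} files, compression ratio {ratio:.3f}:1")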
Therefore, if *-zip can compress an arbitrary proportional assignment of one percent (1%) of all uncompressed files N bytes in length (for every N) by at least three bytes, then the least number of recursive *-zip files that can be compressed at the same compression ratio using "Recursive data compression" is two-hundred-fifty-five percent (255 × 100% × 1% = 255%), equivalent to 2.55 times the aggregate number of all *-zip files of size N. The ratio is unchanged when using a more realistic proportional assignment than one percent.
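The percentages can be traced mechanically. This sketch only restates the arithmetic above, with the one-percent proportional assignment taken as the stated assumption.

    # Restating the counting argument, under the stated assumptions:
    # 1% of all N-byte files compress by >= 3 bytes with *-zip, and each
    # such file has 255 sibling permutations that compress equally well.

    base_fraction = 0.01            # arbitrary proportional assignment
    siblings_per_file = 255         # other permutations per compressible file

    extra_fraction = siblings_per_file * base_fraction
    print(f"extra compressible permutations: {extra_fraction:.2%} "
          f"of all N-byte *-zip files")         # 255% -> 2.55x the aggregate

    aggregate_increase = siblings_per_file * 100
    print(f"aggregate increase: {aggregate_increase:,}%")   # 25,500%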
In addition, if these *-zip files re-compress (as referenced above), then there are at least 65,534 extra compressible file permutations (not 255) in individual relation to these files, provided that they re-compressed by at least three bytes, even if these files will not re-compress again.
Illustration 1.g Sub-Structure IFSP - Analysis (above) of quasi-optimum compression using IFSP Sub-Structure nets (3 × 25,500) 76,500 percent more recursively compressible *-zip files.
Illustration 1.h Sub-Structure VUSP - Analysis (above) of quasi-optimum compression using VUSP Sub-Structure nets (6 × 25,500) 153,000 percent more recursively compressible *-zip files.
The following definition is an abstract of how a method built on a secondary compressibility model was conceived (as I currently view it) from the computable algorithmic perspective. It illustrates how one model of structure consolidation, used to achieve computable reducibility, is analogous in structure to "Recursive data compression."
Illustration 4.g - Equational analysis of quasi-optimum compression for the recursive computable reducibility model. Partition delineated for 4 bits (below).
Illustration 4.a (above)
Computable reducibility model of recursive structure consolidation delineated for 4 bits by illustrations 4.a - 4.b, in contrast with the "Recursive data compression" secondary compressibility model (refer to illustration 1.a).
The computable reducibility model of structure consolidation is implemented by recursively splitting binary segments as a transform mechanism. In its simplest form it is analogous to the created-redundancy strategy implemented by the Recursive data compression secondary compressibility model, when the created redundancy is (in part) the result of the lossless conversion of n symbols to n−1 or fewer symbols.
Conversely, when the created redundancy of the secondary compressibility model relies (in part) on the lossless conversion of n symbols to n+1 or more symbols, it becomes clear that the computable reducibility model of structure consolidation is analogous to (and must be implemented at the time of) keyword formulation. Therefore, I suggest that both sides of the analogy be taken under advisement when devising variations to the algorithms that formulate the keyword for Recursive data compression.
It is possible to transform a series of fixed-length binary symbols into a derivative series which consists of symbols having the identical binary length as the original series but which uses fewer distinct symbols than existed in the original series. This can be done without any information loss, simply by making fundamental adaptations to the input of a transform algorithm in an effort to trend the data toward the optimal conditions of a specific data compression invention. With regard to data compression algorithms which apply some form of run-length encoding, these trends are described as adapting the data to be "conducive to compression algorithms that exploit bit redundancy."
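One familiar transform with this property (my example, not the patent's) is a byte-wise delta: the output symbols keep the same 8-bit length, the transform is exactly invertible, and on trended input the derivative series uses fewer distinct symbols, which is conducive to compression algorithms that exploit bit redundancy.

    # Byte-wise delta: same symbol length, lossless, and on trended input
    # the derivative series uses fewer distinct symbols than the original.

    def delta(data: bytes) -> bytes:
        prev, out = 0, bytearray()
        for b in data:
            out.append((b - prev) & 0xFF)
            prev = b
        return bytes(out)

    def undelta(data: bytes) -> bytes:
        prev, out = 0, bytearray()
        for d in data:
            prev = (prev + d) & 0xFF
            out.append(prev)
        return bytes(out)

    sample = bytes(range(256))             # 256 distinct 8-bit symbols
    trans = delta(sample)                  # a zero followed by 255 ones
    assert undelta(trans) == sample
    print(len(set(sample)), "distinct symbols before,",
          len(set(trans)), "after")        # 256 before, 2 after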
Computable reducibility model of structure consolidation for 8 bits (delineated for 4 bits by illustrations 4.a - 4.b), in contrast with the "Recursive data compression" secondary compressibility model (illustration 1.a).
As described in U.S. Patent 5,488,364 (Recursive data compression), recursively splitting binary segments as a transform mechanism can create redundancy, and as stated above this created redundancy is in part the result of the lossless conversion of n symbols to n−1 or fewer symbols. But as described in the same patent, you can also recursively compress a file using *-zip where the file compresses by a program-reported one hundred percent and the resulting *-zip file also compresses by another program-reported seventy to eighty-plus percent. This is accomplished by escalating the number of symbols (to n+1 or more). It is more difficult to rationalize, but the principles are the same. Therefore, the first subject will be recursively compressing binary subdivisions that compress because they consist of fewer symbols.
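I cannot reproduce the patent's exact figures here, but the general phenomenon, that a highly redundant file compresses by nearly one hundred percent and its compressed image still compresses again on a second pass, can be observed with an ordinary deflate coder. This sketch uses Python's zlib as a stand-in for *-zip; the input and the reductions it reports are properties of this example only.

    import zlib

    # Stand-in demonstration (zlib for "*-zip"): a highly redundant input
    # compresses by nearly 100%, and the compressed image itself still
    # compresses again on a second pass.

    data = bytes(10_000_000)               # ten megabytes of zeros
    pass1 = zlib.compress(data, 9)
    pass2 = zlib.compress(pass1, 9)

    print(f"pass 1: {len(data)} -> {len(pass1)} "
          f"({100 * (1 - len(pass1) / len(data)):.2f}% reduction)")
    print(f"pass 2: {len(pass1)} -> {len(pass2)} "
          f"({100 * (1 - len(pass2) / len(pass1)):.2f}% reduction)")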
One symbol consists of a unique fixed-length string of binary digits. A block of symbols constitutes a binary subdivision. In this scenario the resulting binary subdivisions compress because they are less complex: they consist of fewer distinct symbols.
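A quick way to observe the effect, again using zlib as a stand-in measuring stick: two blocks of equal length, one drawn from two distinct 8-bit symbols and one from 256, compress very differently.

    import random, zlib

    # Equal-length blocks; the one built from fewer distinct fixed-length
    # symbols is less complex and compresses far better.

    random.seed(0)
    n = 65536

    few  = bytes(random.choice((0x41, 0x42)) for _ in range(n))   # 2 symbols
    many = bytes(random.randrange(256)       for _ in range(n))   # 256 symbols

    print("2-symbol block  :", len(zlib.compress(few, 9)), "bytes")
    print("256-symbol block:", len(zlib.compress(many, 9)), "bytes")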
In the table below, column "A" holds numeral values greater than one. The relationship that binds the terms in column "B" to the corresponding value in column "A" is equivalence: each set of terms in column "B" sums to its value in column "A." There are far fewer values in column "A" than there are sets in column "B." This suggests that there are far fewer irreducible values (the sums) than there are reducible (non-random) values (the compositions of terms that equal each sum).
The structure above is a partition, and Peter Gustav Lejeune Dirichlet's (1805-1859) Pigeonhole Principle (also known as "Dirichlet's Principle," "Dirichlet's Drawer Principle," and "Dirichlet's Letter Box Principle") communicates that the partition of a set of n objects into fewer than n mutually exclusive single-object subsets is not possible.
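To make the table concrete, this sketch enumerates the column-B sets (the integer partitions) for a few column-A values. Many column-B sets map to one column-A value, which is the many-to-one shape the Pigeonhole Principle addresses.

    # Column A: a value n > 1.  Column B: the sets of terms whose sum is
    # n (the integer partitions of n).  Many B entries map to one A entry.

    def partitions(n, max_part=None):
        if max_part is None:
            max_part = n
        if n == 0:
            return [[]]
        result = []
        for part in range(min(n, max_part), 0, -1):
            for rest in partitions(n - part, part):
                result.append([part] + rest)
        return result

    for n in (2, 3, 4, 5):
        parts = partitions(n)
        print(f"A = {n}: {len(parts)} column-B sets ->",
              ["+".join(map(str, p)) for p in parts])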
Arithmetic-based partition structures and recursive compression may seem to have no place in modern reducibility theory, especially since the use of arithmetic partitions based on the sum of their parts proposes an inconsistent presumption as to the number of irreducible values (random, incompressible strings). A suitable homogeneous remedy is to build the same structure using terms that produce a product instead of a sum.
The product table produces more irreducible values (random, incompressible strings), closer to the anticipated 50/50 split. This mirrors the notions of contemporary algorithmic theories of reducibility.
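The product analogue can be checked the same way. Following the remedy suggested above (this is my construction of it), the column-B sets become the multiset factorizations into factors greater than one, and values with no non-trivial factorization, the primes, play the role of the irreducible strings.

    # Product analogue of the table: column B now holds the multiset
    # factorizations of n into factors greater than one.  Values whose
    # only column-B entry is the trivial one-term set (the primes) behave
    # like irreducible, incompressible strings.

    def factorizations(n, max_factor=None):
        if max_factor is None:
            max_factor = n
        result = []
        for f in range(min(n, max_factor), 1, -1):
            if n % f == 0:
                if n // f == 1:
                    result.append([f])
                else:
                    for rest in factorizations(n // f, f):
                        result.append([f] + rest)
        return result

    irreducible = [n for n in range(2, 30) if len(factorizations(n)) == 1]
    print("irreducible (prime) values below 30:", irreducible)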
To see exactly how all this works with illustrated examples, please read about reducing the number of symbols in a string using lossless recursive data compression.
When the example uses n=4 the correspondence results in 12 extra members.
Illustration 4.b (above)
Additional information about the invention described in expired* Patent 5,488,364:
Description information from the website of the World Intellectual Property Organization (WIPO), an agency of the United Nations.
Information about the patent claims from the WIPO website.
Bibliographical information from the WIPO website.
Expired* Patent 5,488,364 Document is referenced by:
1 - 8,407,239, Multi-stage query processing system and method for use with tokenspace repository
2 - 8,364,836, Withdrawn, System and methods for accelerated data storage and retrieval
3 - 8,326,984, Selective compression for network connections
4 - 8,321,445, Generating content snippets using a tokenspace repository
5 - 8,275,909, Adaptive compression
6 - 8,275,897, System and methods for accelerated data storage and retrieval
7 - 8,117,173, Efficient chunking algorithm
8 - 8,112,619, Systems and methods for accelerated loading of operating systems and application programs
9 - 8,112,496, Efficient algorithm for finding candidate objects for remote differential compression
10 - 8,090,936, Systems and methods for accelerated loading of operating systems and application programs
11 - 8,073,926, Virtual machine image server
12 - 8,073,047, Bandwidth sensitive data compression and decompression
13 - 8,054,879, Bandwidth sensitive data compression and decompression
14 - 8,024,483, Selective compression for network connections
15 - 8,010,668, Selective compression for network connections
16 - 7,882,084, Compression of data transmitted over a network
17 - 7,873,065, Selectively enabling network packet concatenation based on metrics
18 - 7,849,462, Image server
19 - 7,783,781, Adaptive compression
20 - 7,777,651, System and method for data feed acceleration and encryption
21 - 7,714,747, Data compression systems and methods
22 - 7,705,753, Methods, systems and computer-readable media for compressing data
23 - 7,613,787, Efficient algorithm for finding candidate objects for remote differential compression
24 - 7,555,531, Efficient algorithm and protocol for remote differential compression
25 - 7,370,120, Method and system for reducing network latency in data communication
26 - 7,321,952, System and method for data phase of memory and power efficient mechanism for fast table lookup
27 - 7,296,114, Control of memory and power efficient mechanism for fast table lookup
28 - 7,296,113, Memory and power efficient mechanism for fast table lookup
29 - 7,292,162, Data coding system and method
30 - 6,301,394, Method and apparatus for compressing data
31 - 6,008,657, Method for inspecting the elements of piping systems by electromagnetic waves
32 - 5,977,889, Optimization of data representations for transmission or storage using differences from reference data
33 - 5,666,560, Storage method and hierarchical padding structure for direct access storage device (DASD) data compression
34 - 5,627,534, Dual stage compression of bit mapped image data using refined run length and LZ compression
Continue: illustration of Recursive data compression utilizing an invented recursive structure that reduces the number of symbols in a string by ninety-eight percent (98%)