I have ALWAYS wondered this. So good for it to appear in my recommendations. Instant watch
Top video
Another banger video. ✈✈
the emojis
Keep up the work!!
fire
If only we had infinite compute power so we wouldn't even need to store data. Just regenerate it from scratch by brute-forcing every permutation until you reach a satisfactory result.
That would definitely be one solution 😅
Hope my bank database has this feature so I can generate an infinite amount 😂
How well does this scale to large random or encrypted data? Meaning there are no exploitable bit patterns due to the randomness. Or are there other compression methods for encrypted data or data with "entropy"?
@KassiopeiaYT Great question! (Pseudo)random data is by its nature not very compressible. Given a specific piece of 'random' data, different compression methods *may* yield nontrivial compression ratios, but when they do it's coincidental. One example is the Bitcoin blockchain: hundreds of gigabytes of mostly signatures, public keys, and hashes, which compress poorly. Mixed in are compressible bits, like arbitrary data stored in OP_RETURN outputs, but there's a good reason no compression mechanism is included in block storage despite chain growth being a serious concern.
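To make that concrete, here's a minimal sketch using Python's standard-library zlib. The specific strings and sizes are just illustrative stand-ins, not measurements from any real blockchain data, but the effect (random bytes don't shrink, structured bytes do) is the general one.

```python
# Minimal sketch: structured data compresses well, (pseudo)random data does not.
import os
import zlib

def ratio(data: bytes) -> float:
    """Return compressed size / original size for a DEFLATE pass."""
    return len(zlib.compress(data, 9)) / len(data)

structured = b"the quick brown fox jumps over the lazy dog " * 1000
random_like = os.urandom(len(structured))  # stand-in for signatures/keys/hashes

print(f"structured text: {ratio(structured):.3f}")   # typically well under 0.1
print(f"random bytes:    {ratio(random_like):.3f}")  # typically just over 1.0
```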
Specifically, with *encrypted* data, the advice is always to compress the data prior to encryption. But if you have large amounts of encrypted data for which you don't hold the keys, you're pretty much out of luck. In the case of, say, end-to-end encrypted messaging, the work of compression, where it's performed at all, is offloaded to clients. This can be limiting: if you're a third party storing end-to-end encrypted data, there's an inherent tradeoff between how much data you compress at once (more data lets more patterns emerge) and how much data you're willing to send down to clients and re-ingest.
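Here's a small sketch of why the ordering matters. The cipher below is a one-time pad (XOR with a random key of equal length), used purely as a stand-in because it's short to write; any secure cipher likewise produces ciphertext that looks random to a compressor.

```python
# Minimal sketch: compress-then-encrypt vs. encrypt-then-compress.
import os
import zlib

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """One-time-pad XOR; stand-in for a real cipher for size comparison only."""
    return bytes(p ^ k for p, k in zip(plaintext, key))

message = b"meet me at the usual place at noon " * 500
key = os.urandom(len(message))

# Order 1: compress first, then encrypt -- the compressor sees the structure.
compressed = zlib.compress(message, 9)
small_ciphertext = otp_encrypt(compressed, key[:len(compressed)])

# Order 2: encrypt first, then compress -- the ciphertext looks random.
ciphertext = otp_encrypt(message, key)
recompressed = zlib.compress(ciphertext, 9)

print(f"original:              {len(message)} bytes")
print(f"compress then encrypt: {len(small_ciphertext)} bytes")
print(f"encrypt then compress: {len(recompressed)} bytes")  # slightly larger than original
```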
There *are* some techniques based on homomorphic encryption (schemes designed to allow certain mathematical operations on encrypted data), but they're fairly novel and I honestly can't speak to their viability in real-world scenarios.
hello youtube