
[Feature] Intelligent Client-Side Pipeline: Compression & Chunking to Optimize Attachment Limits #2130

@tsetmyzn-droid

Description


What feature would you like?

[x] I have searched the existing issues and this is not a duplicate.

Is your feature request related to a problem? Please describe.
Yes. The current 10MB attachment limit is a primary constraint for users sharing high-quality documents, logs, or archives. While the limit is necessary to protect the Service Node network from storage and bandwidth exhaustion, it reduces the app's utility in professional and technical environments.

Describe the solution you'd like
I propose an Intelligent Pre-Transmission Pipeline implemented in the Android client. This pipeline will "virtually" expand attachment capacity while staying within the physical constraints of the network.

The Pipeline Workflow:
Smart File Inspection:

The client checks the file's MIME type and entropy.

Compressible Files (Docs, TXT, PDF, JSON): Apply Gzip/Zlib compression before encryption.

High-Entropy Files (Media, Encrypted Archives): Skip compression to save resources and proceed to Chunking.
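The inspection step above can be sketched with a Shannon-entropy estimate over a sampled prefix of the file. This is a minimal illustration, not existing Session code: `shouldCompress`, the 7.5 bits/byte threshold, and the MIME prefixes are all assumptions that would need tuning.

```kotlin
import kotlin.math.ln

// Estimate Shannon entropy (bits per byte) of a sample buffer.
fun shannonEntropy(sample: ByteArray): Double {
    if (sample.isEmpty()) return 0.0
    val counts = IntArray(256)
    for (b in sample) counts[b.toInt() and 0xFF]++
    var entropy = 0.0
    for (c in counts) {
        if (c == 0) continue
        val p = c.toDouble() / sample.size
        entropy -= p * (ln(p) / ln(2.0))
    }
    return entropy
}

// Media and archives are already near 8 bits/byte and won't shrink,
// so they skip straight to chunking. Threshold is illustrative.
fun shouldCompress(mimeType: String, sample: ByteArray): Boolean {
    val alreadyCompressed = mimeType.startsWith("image/") ||
        mimeType.startsWith("video/") ||
        mimeType.startsWith("audio/") ||
        mimeType == "application/zip"
    return !alreadyCompressed && shannonEntropy(sample) < 7.5
}
```

Sampling only a prefix (say, the first 64KB) keeps the decision cheap even for large files.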

Client-Side Chunking:

If a file (post-compression) exceeds 10MB, the client splits it into encrypted Chunks (e.g., a 25MB file split into 3 blobs of ~8.4MB).

Each chunk is uploaded as a standalone blob to the File Server.
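The split itself can be done as a streaming read into fixed-size pieces, so the whole file is never held in memory at once. A minimal sketch; the 8MB default is an assumption chosen under the 10MB cap to leave headroom for per-chunk encryption overhead, and the final chunk is simply shorter rather than rebalancing all chunks evenly.

```kotlin
import java.io.InputStream

val DEFAULT_CHUNK_SIZE = 8 * 1024 * 1024

// Cut the (already compressed/encrypted) byte stream into pieces of at
// most chunkSize bytes, handing each one off for standalone upload.
fun chunkStream(
    input: InputStream,
    chunkSize: Int = DEFAULT_CHUNK_SIZE,
    onChunk: (ByteArray) -> Unit
) {
    val buffer = ByteArray(chunkSize)
    while (true) {
        var read = 0
        // Fill the buffer fully unless the stream ends first.
        while (read < chunkSize) {
            val n = input.read(buffer, read, chunkSize - read)
            if (n < 0) break
            read += n
        }
        if (read == 0) break            // stream ended exactly on a chunk boundary
        onChunk(buffer.copyOf(read))
        if (read < chunkSize) break     // short final chunk: stream exhausted
    }
}
```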

Manifest-Based Delivery:

The Onion-routed control message will contain a JSON Manifest listing all chunk URLs, SHA-256 hashes for integrity, and the master AES-GCM decryption key.

The recipient client reconstructs the file in a background stream.
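The manifest could look something like the sketch below. The field names, the `ChunkRef` shape, and the example URL are hypothetical; the real schema would need to be agreed with the core team.

```kotlin
// Hypothetical manifest carried in the onion-routed control message.
data class ChunkRef(val index: Int, val url: String, val sha256Hex: String)

data class AttachmentManifest(
    val version: Int,
    val totalSize: Long,
    val compressed: Boolean,
    val keyBase64: String,   // master AES-GCM key; protected by the message encryption itself
    val chunks: List<ChunkRef>
)

// Minimal hand-rolled JSON rendering for illustration; a real client
// would use its existing JSON serializer.
fun AttachmentManifest.toJson(): String = buildString {
    append("{\"version\":$version,\"totalSize\":$totalSize,")
    append("\"compressed\":$compressed,\"key\":\"$keyBase64\",\"chunks\":[")
    append(chunks.joinToString(",") {
        "{\"index\":${it.index},\"url\":\"${it.url}\",\"sha256\":\"${it.sha256Hex}\"}"
    })
    append("]}")
}
```

The recipient downloads each `url`, verifies each blob against its `sha256Hex` before use, and concatenates the chunks in `index` order while decrypting.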

Technical Implementation & Resource Efficiency (Android)
Low-Resource Compatibility: Since I am targeting devices with limited RAM (3.5GB), the implementation will strictly use streaming APIs (CipherOutputStream, GZIPOutputStream, and FileChannel). This ensures a constant, low memory footprint regardless of file size.

Disk-to-Network: Data will be streamed directly from the file URI to the network socket, never buffering the whole file, which avoids OutOfMemoryError crashes on the JVM heap.
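The compress-then-encrypt chain described above can be sketched as a pair of nested streams, so data flows source → GZIP → AES-GCM → sink in small buffers. The function name and parameters are illustrative; the real integration point would be inside the existing upload job.

```kotlin
import java.io.InputStream
import java.io.OutputStream
import java.util.zip.GZIPOutputStream
import javax.crypto.Cipher
import javax.crypto.CipherOutputStream
import javax.crypto.spec.GCMParameterSpec
import javax.crypto.spec.SecretKeySpec

// Streaming compress-then-encrypt: plaintext written here is gzipped
// first, then enciphered with AES-GCM, then written to the sink.
// Peak memory is one copy buffer regardless of file size.
fun compressAndEncrypt(
    source: InputStream,
    sink: OutputStream,
    key: ByteArray,   // 256-bit AES key
    iv: ByteArray     // 96-bit GCM nonce, unique per file
) {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, SecretKeySpec(key, "AES"), GCMParameterSpec(128, iv))
    // Wrapping order matters: GZIP is outermost so compression happens
    // on the plaintext, before encryption.
    GZIPOutputStream(CipherOutputStream(sink, cipher)).use { out ->
        source.copyTo(out, bufferSize = 64 * 1024)
    }
}
```

Closing the outer stream flushes the final GZIP trailer and the GCM authentication tag; the recipient reverses the chain with `CipherInputStream` and `GZIPInputStream`.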

Describe alternatives you've considered
Increasing Node Limits: Rejected to avoid burdening Service Node operators.

P2P Transfer: Rejected to maintain IP anonymity through the Onion network.

Additional Context
I am interested in contributing to the implementation of this pipeline on the Android client. I would appreciate guidance from the core team on the current AttachmentUploadJob and NetworkLayer.kt structure to ensure the best integration.

Anything else?

To address potential concerns regarding this proposal:

Security (Compression side-channels): Compression is performed strictly before AES-GCM encryption, so the ciphertext remains high-entropy and reveals nothing about the plaintext's structure. Attacks like CRIME/BREACH additionally require an attacker who can inject chosen data into the compressed plaintext and observe many ciphertext lengths; neither condition applies to a one-shot, client-side file attachment.

Server-Side Impact: This approach requires zero changes to the existing File Server or Service Node protocol. From the server's perspective, these are just standard 10MB blobs. The logic is entirely handled by the client-side "Manifest" interpretation.

Network Reliability: By using Chunking, we actually improve reliability on unstable networks. If one 8MB chunk fails to upload, the client only needs to retry that specific chunk rather than restarting a single massive 25MB upload.
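The retry behavior can be illustrated with a small loop in which only the failed chunk is re-attempted, never the whole payload. `upload` is a hypothetical stand-in for the real per-blob upload call; the attempt count is an assumption.

```kotlin
// Upload chunks in order, retrying each failed chunk up to maxAttempts
// times on its own. A 25MB transfer never restarts from scratch because
// one 8MB piece dropped.
fun uploadWithRetry(
    chunks: List<ByteArray>,
    maxAttempts: Int = 3,
    upload: (index: Int, chunk: ByteArray) -> Boolean
): Boolean {
    chunks.forEachIndexed { index, chunk ->
        var attempts = 1
        while (!upload(index, chunk)) {
            if (++attempts > maxAttempts) return false   // give up on this chunk only
        }
    }
    return true
}
```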

Recipient Experience: For backward compatibility, the manifest can be wrapped in a way that older clients display a fallback message (e.g., "Attachment format not supported, please update Session"), preventing any app crashes.
