A privacy-preserving Android app that performs real-time grammar correction across all apps using on-device LLM inference via ExecuTorch.
- System-wide grammar correction - Works in any app
- On-device processing - All processing happens locally, no data leaves the device
- Floating overlay - Quick access to grammar correction with a single tap
- Privacy-first - No internet connection required, no user data stored
User selects text
↓
AccessibilityService detects selection
↓
Overlay shows pen icon
↓
User taps icon
↓
KeyboardThread → GrammarService → ExecuTorch LLM
↓
Corrected text replaces selected portion
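The detection step in this flow can be sketched as an accessibility event handler. This is a minimal illustration of the idea, not the app's actual implementation; `showOverlay` is a placeholder name:

```kotlin
import android.accessibilityservice.AccessibilityService
import android.view.accessibility.AccessibilityEvent

class GrammarAccessibilityService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent) {
        // Fires when the user changes the text selection in any app.
        if (event.eventType != AccessibilityEvent.TYPE_VIEW_TEXT_SELECTION_CHANGED) return

        val node = event.source ?: return
        val start = node.textSelectionStart
        val end = node.textSelectionEnd
        if (start in 0 until end) {
            val selected = node.text?.subSequence(start, end) ?: return
            // Non-empty selection: show the floating pen icon.
            showOverlay(selected.toString())
        }
    }

    override fun onInterrupt() = Unit

    private fun showOverlay(selectedText: String) {
        // Placeholder: the real service positions OverlayView near the selection.
    }
}
```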
- Language: Kotlin
- Min SDK: 30 (Android 11)
- Target SDK: 30
- ML Framework: ExecuTorch Android 1.1.0
- Model: Qwen3 0.6B (472 MB)
- Integration: Android AccessibilityService
- Android Studio Giraffe or later
- Android SDK 30+
- JDK 17
- 8GB+ RAM available on device (for model inference)
- Gradle 8+ (project uses 8.13)
- Clone the repository:
git clone https://github.com/[username]/minigram.git
cd minigram
- Build the APK:
./gradlew assembleDebug
The APK will be available at app/build/outputs/apk/debug/app-debug.apk (~525 MB)
- Install on your Android device:
adb install app/build/outputs/apk/debug/app-debug.apk
Or transfer the APK to your device and install it from a file manager.
- Enable Accessibility Service:
  - Open Settings → Accessibility
  - Find "MiniGram"
  - Enable the service
- Grant Overlay Permission:
  - Follow the on-screen prompts to grant SYSTEM_ALERT_WINDOW permission
- Select text in any app:
  - Long-press to select text
  - MiniGram overlay icon appears
- Tap the overlay:
  - Model processes text locally
  - Corrected text replaces selected text automatically
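The final replacement step boils down to splicing the corrected text into the field's full text at the selection bounds; TextReplacer can then push the result back through the accessibility node's ACTION_SET_TEXT. A minimal sketch of the splice logic (`spliceCorrection` is an illustrative name, not necessarily the app's):

```kotlin
// Splices `corrected` over the [start, end) selection range of `fullText`.
// Indices are clamped defensively, since some apps report out-of-range
// selection indices through accessibility events.
fun spliceCorrection(fullText: String, start: Int, end: Int, corrected: String): String {
    val s = start.coerceIn(0, fullText.length)
    val e = end.coerceIn(s, fullText.length)
    return fullText.substring(0, s) + corrected + fullText.substring(e)
}
```

For example, `spliceCorrection("I has a pen.", 2, 5, "have")` yields `"I have a pen."`.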
The app relies on accessibility events, and some apps report selection indices in unexpected ways. If a crash occurs, restart the app and select the text again.
- Check that AccessibilityService is enabled in Settings → Accessibility
- Verify overlay permission is granted
- Check logs for "MiniGram" tag to see processing errors
The first inference takes longer because ExecuTorch loads the model on demand:
- Wait 30-60 seconds after first tap
- Subsequent taps will be faster (~3-5 seconds)
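Model loading and generation can be sketched against ExecuTorch's Android LLM runner API. Treat this as an assumption-laden sketch: the package, class, and callback names follow ExecuTorch's LLM extension and may differ in the release this project pins:

```kotlin
// Sketch only: LlmModule/LlmCallback names and signatures vary across
// ExecuTorch releases; consult the version bundled with the app.
import org.pytorch.executorch.extension.llm.LlmCallback
import org.pytorch.executorch.extension.llm.LlmModule

class GrammarInference(modelPath: String, tokenizerPath: String) {
    // Constructing the module loads the .pte model; this is why the
    // first correction takes 30-60 seconds.
    private val module = LlmModule(modelPath, tokenizerPath, 0.0f)

    fun correct(text: String, onDone: (String) -> Unit) {
        val out = StringBuilder()
        module.generate("Correct the grammar: $text", object : LlmCallback {
            override fun onResult(token: String) { out.append(token) }   // streamed tokens
            override fun onStats(tps: Float) { onDone(out.toString()) }  // generation finished
        })
    }
}
```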
minigram/
├── app/
│ └── src/main/
│ ├── java/com/minigram/
│ │ ├── GrammarAccessibilityService.kt
│ │ ├── GrammarService.kt
│ │ ├── OverlayView.kt
│ │ ├── TextReplacer.kt
│ │ └── MainActivity.kt
│ ├── res/
│ │ ├── assets/models/
│ │ │ ├── qwen3_0.6B_model.pte (472 MB)
│ │ │ ├── tokenizer.json (11 MB)
│ │ │ └── vocab.json (2.7 MB)
│ │ ├── mipmap-*/ic_launcher.png
│ │ └── xml/accessibility_service_config.xml
│ └── AndroidManifest.xml
├── .gitignore
├── build.gradle.kts
├── settings.gradle.kts
└── README.md
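The accessibility_service_config.xml listed above typically declares which events the service listens for. An illustrative example with assumed values (the real file may request different flags):

```xml
<accessibility-service xmlns:android="http://schemas.android.com/apk/res/android"
    android:accessibilityEventTypes="typeViewTextSelectionChanged"
    android:accessibilityFeedbackType="feedbackGeneric"
    android:canRetrieveWindowContent="true"
    android:notificationTimeout="100" />
```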
- GrammarAccessibilityService: Monitors text selection events
- GrammarService: Manages LLM model and handles inference
- OverlayView: Floating UI with pen/rotate icon for busy state
- TextReplacer: Replaces selected text with corrected text
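As a sketch of how OverlayView can be attached system-wide (illustrative only; `showPenOverlay` is not necessarily the app's method name). TYPE_APPLICATION_OVERLAY is what the SYSTEM_ALERT_WINDOW permission unlocks:

```kotlin
import android.content.Context
import android.graphics.PixelFormat
import android.view.Gravity
import android.view.View
import android.view.WindowManager

// Sketch: attaches a small tappable icon as a system overlay window.
// Requires the SYSTEM_ALERT_WINDOW permission the app asks for during setup.
fun showPenOverlay(context: Context, icon: View, onTap: () -> Unit) {
    val wm = context.getSystemService(Context.WINDOW_SERVICE) as WindowManager
    val params = WindowManager.LayoutParams(
        WindowManager.LayoutParams.WRAP_CONTENT,
        WindowManager.LayoutParams.WRAP_CONTENT,
        WindowManager.LayoutParams.TYPE_APPLICATION_OVERLAY,
        WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE,
        PixelFormat.TRANSLUCENT
    ).apply { gravity = Gravity.TOP or Gravity.START }
    icon.setOnClickListener { onTap() }
    wm.addView(icon, params)
}
```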
- Debug:
./gradlew assembleDebug
- Clean build:
./gradlew clean && ./gradlew assembleDebug
- All processing happens on device
- No data leaves the device
- No analytics or telemetry
- No cloud APIs or services
- ExecuTorch - On-device ML inference framework
- Qwen3 - Qwen3 0.6B model
