
🎙️ Arty - iOS realtime voice assistant w/ translation + connectors


Features

  • Realtime voice assistant, fully interruptible
  • Realtime translator/interpreter mode, including bi-directional translation
  • Extensions/connectors: DeepWiki MCP, Google Drive, GitHub, Hugging Face Papers, Hacker News, Web Search
  • Generic MCP connector (experimental)
  • Works with 🔈 speaker or 🎧 headphones
  • Customizable prompts: edit system and tool prompts directly from the config UI

Tech Stack

  • WebRTC
  • Native Swift w/ echo cancellation support
  • Expo / React Native
  • OpenAI gpt-realtime for voice assistant mode
  • gpt-realtime-translate for translation mode
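As an illustrative sketch only (not the app's actual code), a Realtime session for translation mode might be configured with a plain `session.update` payload like the one below. The model name comes from the list above; the voice, instructions text, and helper name are hypothetical:

```typescript
// Sketch of a session-update payload in the style of the OpenAI Realtime
// API. The model name is from the Tech Stack list; everything else here
// (voice, instructions, function name) is a hypothetical example.
type RealtimeSessionConfig = {
  type: "session.update";
  session: {
    model: string;
    voice: string;
    instructions: string;
  };
};

function buildTranslatorSession(from: string, to: string): RealtimeSessionConfig {
  return {
    type: "session.update",
    session: {
      model: "gpt-realtime-translate",
      voice: "alloy",
      instructions:
        `Translate everything the user says between ${from} and ${to}. ` +
        `Detect which language is being spoken and reply in the other one.`,
    },
  };
}
```

Bi-directional translation falls out of the prompt: the model detects the spoken language and answers in the other, so neither side has to toggle a direction switch.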

Requirements

  • iOS: tested on iOS 26; may work on earlier versions
  • Bring Your Own OpenAI API Key

📱 Screenshots

  • Voice chat (home screen)
  • Realtime Translation/Interpreter
  • Configure connectors

▶️ Install it via TestFlight

Join the TestFlight beta

Test Flight Installation

Security note: TestFlight builds are compiled binaries; do not assume they exactly match this source code. If you require verifiability, build from source and review the code before installing.

🔐 Security + Privacy

Privacy status: Currently, the app uses OpenAI's API, which means user prompts and connector content are transmitted to OpenAI by design. Your credentials (API keys, OAuth tokens) never leave your device and are stored securely in the iOS Keychain. Future updates will add support for self-hosted and fully local execution options.

🛠️ Building from source

Installation steps

Clone project and install dependencies

git clone https://github.com/vibemachine-labs/arty.git
cd arty
curl -fsSL https://bun.sh/install | bash
bun install

Run the app

To run in the iOS simulator:

bunx expo run:ios

⛓️‍💥 If you get CommandError: No iOS devices available in Simulator.app, it means you need to install the iOS platform in Xcode. Go to Xcode > Settings > Components and install the iOS platform.

⚠️ Audio is flaky on the iOS Simulator. Using a real device is highly recommended.

To run on a physical device:

bunx expo run:ios --device
Editing Swift code in Xcode

Open Xcode project

To open the project in Xcode:

xed ios

In Xcode, the native Swift code is located under Pods / Development Pods.

Misc Dev Notes

Create a Google Drive Client ID (Optional)

When building from source, you will need to provide your own Google Drive Client ID. You can decide the permissions you want to give it, as well as whether you want to go through the verification process.

Google API Instructions

For testing, the following OAuth scopes are suggested:

  1. See and download your Google Drive files (included by default)
  2. See, edit, create, and delete only the specific Google Drive files you use with this app

Development notes

  • Project bootstrapped with bunx create-expo-app@latest .
  • Refresh dependencies after pulling new changes: bunx expo install
  • Install new dependencies: bunx expo install <package-name>
  • Allow LAN access once: bunx expo start --lan

Run on iOS device via ad hoc distribution

  1. Register the device: bunx eas device:create
  2. Scan the generated QR code on the device and install the provisioning profile via Settings.
  3. Configure the build: bunx eas build:configure
  4. Build: bunx eas build --platform ios --profile dev_self_contained

Clean build

If pods misbehave, rebuild from scratch:

bunx expo prebuild --clean --platform ios
bunx expo run:ios

📦 Expo Build

Build Steps

Additional Deps

brew install fastlane

Expo login

bunx eas login

Setup Apple Dev Account

bunx eas credentials

Run build wizard

bun run wizard

⚙️ Technical Details

Architecture overview

Native Swift WebRTC Client

React Native WebRTC libraries did not reliably support speakerphone mode during prototyping. The native Swift implementation resolves this issue but adds complexity and delays Android support.

Connector Architecture

All connectors use statically defined tools with explicit function definitions, providing reliability and predictable behavior. Examples include Google Drive file operations, DeepWiki documentation search, Hacker News browsing, and Daily Hugging Face Top Papers discovery.
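The static-tool approach described above can be sketched as follows. This is a hypothetical illustration, not the app's actual definitions: the tool name, schema fields, and stub handler are invented, while the overall shape follows the JSON-Schema-based function format used for OpenAI tool calling.

```typescript
// Sketch of a statically defined connector tool. The schema follows the
// JSON-Schema function format used by OpenAI tool calling; the tool name
// and fields are hypothetical examples, not Arty's actual definitions.
const searchHackerNewsTool = {
  type: "function" as const,
  name: "search_hacker_news",
  description: "Search Hacker News stories by keyword.",
  parameters: {
    type: "object",
    properties: {
      query: { type: "string", description: "Search terms" },
      limit: { type: "number", description: "Max results to return" },
    },
    required: ["query"],
  },
};

type ToolHandler = (args: Record<string, unknown>) => Promise<string>;

// A static dispatch table maps tool names to handlers, so the set of
// available behaviors is explicit and known at compile time rather than
// discovered at runtime.
const toolHandlers: Record<string, ToolHandler> = {
  search_hacker_news: async (args) =>
    JSON.stringify({ query: args.query, results: [] }), // stub handler
};

async function dispatchToolCall(
  name: string,
  args: Record<string, unknown>
): Promise<string> {
  const handler = toolHandlers[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args);
}
```

Because every tool and its parameter schema is declared up front, the model can only call functions the app has explicitly exposed, which is what makes the behavior predictable.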

MCP Support

MCP (can be enabled in settings): Connects to external MCP servers via a streamable HTTP endpoint.

Note: OAuth token refresh is implemented but still very buggy.
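For context, the Streamable HTTP transport carries ordinary JSON-RPC 2.0 messages POSTed to the server's endpoint. A minimal sketch of building the initial handshake message is below; the protocol version shown is one published MCP spec revision, and the client name, version, and endpoint URL are hypothetical:

```typescript
// Sketch of the JSON-RPC 2.0 "initialize" request an MCP client sends to
// a streamable HTTP endpoint. Client info is illustrative; consult the
// MCP specification for the authoritative message shape.
function buildInitializeRequest(id: number) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "initialize",
    params: {
      protocolVersion: "2025-03-26", // example spec revision
      capabilities: {},
      clientInfo: { name: "example-client", version: "0.1.0" },
    },
  };
}

// The request would then be POSTed to the server's MCP endpoint, e.g.:
//   await fetch("https://example.com/mcp", {
//     method: "POST",
//     headers: {
//       "Content-Type": "application/json",
//       Accept: "application/json, text/event-stream",
//     },
//     body: JSON.stringify(buildInitializeRequest(1)),
//   });
```

The server replies either with a plain JSON body or by opening a server-sent-events stream on the same response, which is what makes the transport "streamable."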

Web Search

GPT-4 web search serves as a temporary solution. The roadmap includes integrating a dedicated search API (e.g., Brave Search) using user-provided API tokens.

Voice / Text LLM backend

OpenAI is currently the only supported backend. Adding support for multiple providers and self-hosted backends is on the roadmap.

📬 Contact & Feedback

  • Email/Twitter: reach me by email or on Twitter/X via the links on my GitHub profile.
  • Issues, ideas: submit bugs, feature requests, or connector suggestions on GitHub Issues.
  • Responsible disclosure: report security-relevant issues privately via the email address listed on my GitHub profile before any public disclosure.
